I’m a little confused by this post, in that it seems to be a little all over the place and comes off as a general “how things weren’t so good, but are better now and there’s a happy end” story. It says that it’s about agency, but several of the problems involved (e.g. there being two factions with contradictory goals in the original group, the fact that the purchasing negotiations were complex) have no obvious connection to people being agenty.
I also have only a rough guess of what’s meant by “agenty” in the first place, which might contribute to my confusion. I think this post could benefit if it explicitly gave a definition for the term and more clearly linked the various parts of the story with that definition. There are already some parts where the connection to agency is made quite clearly—e.g. “Today I know that if an agenty person has to write bylaws and they don’t have experience, they go off and read about how to write bylaws”—but they’re currently the exception rather than the norm.
I’m also not entirely convinced that “agency” is the best possible way of characterizing the events in question. For instance:
I appreciate the very few people who came to all of the meetings, and the people who actually put down their money and committed who didn’t come to meetings. Even the people who just did a little, took on a risk that other people didn’t, they did a lot more than the people who did nothing.
This sounds to me like “some of the people were more motivated by this goal than others” could be both a more accurate and a more useful description than “some were more agenty than others”. More useful in the sense that the first description allows for the possibility of asking questions like “Might these people work harder if they were more motivated? Could they be more motivated? Is this lack of motivation a sign of the fact that they’re not so strongly on board with this anyway, suggesting troubles later on?” and then taking different kinds of action based on the answers you get. The second description feels more likely to just make one shrug their shoulders and think “eh, they’re just not sufficiently agenty, that’s too bad”. Or to put it in other words, the “agent” characterization sounds like an error model of people rather than a bug model.
So I think that this post would also benefit from not just defining “agenty”, but also saying a few words about why we should expect this to be a useful and meaningful concept.
Thanks for taking the time to write out this thoughtful feedback and these questions.
I’m a little confused by this post, in that it seems to be a little all over the place and comes off as a general “how things weren’t so good, but are better now and there’s a happy end” story.
I needed an example to make my point, and the founding of Tortuga was the one I came up with. That particular story was all over the place and a mess, which is kind of the point. Real life is messy. The whole thing was a big mess that Patri and I and the group somehow managed to persist through and make work.
The ending I was shooting for was more appreciation of people like Patri, especially those in this community, and both inspiration and caution regarding agency. It’s really, really, really hard, and some people do it. If you try it and you’re not used to it, you’ll probably fail immediately. This is to be expected, and if you really want to be an agent, you don’t give up and let that stop you, like it would for most people.
but several of the problems involved (e.g. there being two factions with contradictory goals in the original group, the fact that the purchasing negotiations were complex) have no obvious connection to people being agenty.
Yes, that was just an example of the stupid crap that came up in this particular case. How we dealt with it was agenty—we didn’t just let it destroy the project—Patri did research, I figured out how our case was like an example in his research, and he figured out a solution to the problem we identified. In most cases, when a group got stuck with something like two factions, it would simply fail, and that would be the end of the project.
Sorry about the lack of a definition of agency—it’s a term used very frequently by the Less Wrong types I hang out with, so I figured it was common and safe lingo to use. I should have known better, since I also had someone ask in another post. Here’s my quick answer:
Taking personal responsibility for making things happen. Observing opportunities and going for them. Taking risks.
I’m also not entirely convinced that “agency” is the best possible way of characterizing the events in question. For instance:
I appreciate the very few people who came to all of the meetings, and the people who actually put down their money and committed who didn’t come to meetings. Even the people who just did a little, took on a risk that other people didn’t, they did a lot more than the people who did nothing.
I don’t think I said anything about comparing agency, or that every single thing I wrote was specifically and directly about agency—you are arguing with a claim I didn’t make. Writing that was an attempt to show appreciation for people doing anything, since most people in my experience do absolutely nothing to make things happen outside of societal norms. It’s frustrating that everyone doesn’t do more, but I do want to give at least some positive reinforcement for doing anything. If it hadn’t been for the people who came to meetings, nothing would have happened, in the same way that if Patri hadn’t been there, nothing would have happened. It’s just that people who come to meetings are much more common than Patris. Feel free to ask clarifying questions on this—I realize it’s not the most elegantly written, and I’m not quite sure how to get at exactly what you’re after.
So I think that this post would also benefit from not just defining “agenty”, but also saying a few words about why we should expect this to be a useful and meaningful concept.
People who are agenty are people who make shit happen. Amazing things don’t just happen by themselves. That’s why the world doesn’t function the way we can all imagine it functioning in more ideal circumstances. To really make the world as awesome as it could be, we need more agents. And the agents we do have are almost all struggling with the sort of problems that happened in the founding of Tortuga that I described. The problems are different in different situations, but generally there is a very small number of people on any given project who are really thinking about it, acting with intention, and keeping the big picture in mind, and they have to manage everything and everyone else, which is very challenging.
I feel like I should edit this more since these are such good questions, but unfortunately I don’t have the energy for it right now and am unlikely to in the near future. I hope this helps!
I think what Kaj is responding to, is that the post doesn’t have the abstract clarity of purpose of a typical post in the Main forum. It’s more of a personal history and a passionate exhortation to reward agency when it appears within the LW community. It’s a bit out of line for me to play LW front-page style-pundit, when I am mostly a careless creature of Discussion and have no ambition to shape LW’s appearance or editorial norms, and I even sort of like the essay as it is; but it probably does deserve a rewrite. (It’ll get twice as many upvotes if you do it really well.)
It’s true, my writing is not as high quality as most of the top-level posts. I’m not a professional writer at all. Although I did get someone good to edit this for me, so it’s much better than it would have been without that.
I don’t know of anyone who is a better writer than I am who understands and cares about this content enough to put it out there, so I did it myself. If you or anyone you know who is a better writer would like to do a rewrite, by all means, I would love for them to do it!
I don’t think it’s the general quality of your writing that’s causing problems; I think it’s a particular, specific flaw in this essay. Compare this comment thread to the one under ‘How To Deal With Depression’—there’s agreement and there’s disagreement, but unlike in this comment thread there’s no deep confusion about what your point is and how your essay supports it.
So what is that flaw? My theory is that ‘agentiness’ is psychological phlogiston, an imprecise non-explanation which should be purged from our collective vocabulary with great force. Taboo it, decompose it and retry.
If I’m right about the problem but wrong about the solution, my next best guess is that you’ve chosen too complicated an anecdote. I can see why you wouldn’t want to expand on the hospital story specifically, but something about that size might work better.
The ending I was shooting for was more appreciation of people like Patri, especially those in this community, and both inspiration and caution regarding agency. It’s really, really, really hard, and some people do it. If you try it and you’re not used to it, you’ll probably fail immediately. This is to be expected, and if you really want to be an agent, you don’t give up and let that stop you, like it would for most people.
In all seriousness, though, why bother? As long as there are colossi striding the world, what possible effect will we mere mortals have?
In general, agency provides its own rewards. I’m more curious what kind of teleological narrative we mere mortals can maintain, in the face of people who are simply, objectively better than us at getting shit done no matter what.
What influence do average people have on anything that actually matters, compared to people like Patri or Eliezer?
As someone who has met Patri and Eliezer (and many other heroes besides), I can tell you this: they are men of flesh and blood, with their own insecurities and fears. And I can tell you that they cannot do it alone—why else would Patri have started the Seasteading Institute, or Eliezer Less Wrong? They have both put significant labor into building communities, support networks, and organizations because they need the help of ‘average people’.
They are impressive. Let’s strive to emulate their best qualities. But to the extent that we wait in the shadows for them to fix the world for us, we also sabotage their efforts. They need us. They need you.
I’d also recommend you take a look at this diagram.
That assumes that the individual is in control of their own mindset.
Mindsets arise through an interaction of the individual and their environment. The individual’s social environment, in particular, plays a strong role in determining one’s view of challenges and opportunities, of flaws and capabilities, and of agency and fate.
In the absence of warmth, sunlight, nutrients and water, a seed will not grow, even if it is (genetically) a perfectly formed and hardy seed. In the absence of resources and adequately-scaled challenges, a mind will not flourish, even if it is (genetically) a perfectly formed and clever mind.
You sound like you’re making excuses for not trying to do things. It seems like you’re trying to defend your belief that you’re incapable, because admitting that you don’t have to be would mean you’d a) have to do something difficult like try things, b) have to face the potential for failure, and c) have to admit that you’ve been wasting your time working on things that don’t matter as much as what you could be working on.
Secondly—Less Wrong isn’t the worst environment for nurturing your mindset. For all the inaction we have around here, we at least have some pretty good memes (see the Challenging the Difficult sequence).
Anyway—I think you’ll improve your mindset as soon as you want to. I’m going to get back to trying to help.
My sense was that he was discussing one’s ‘environment in general’, and I was recommending thinking of LW as part of his environment, since it has some good memes. I wasn’t trying to correct a misunderstanding of LW, but rather encourage him to absorb good memes from LW.
Colossi are better at getting shit done when surrounded by a legion of supporters, than when alone. Any given member of that legion may be interchangeable or even ultimately dispensable, but each has a marginal contribution to make.
True. I guess my own personal narrative has taught me to be extremely distrustful of any role where I am ultimately dispensable and interchangeable—I’m tired of being reassigned to bus-axle greasing duties while the bus is still rolling.
I think the idea of near mode and far mode might be useful in formulating a definition. Something like, “a person is like an agent to the extent that they consistently and intelligently work towards accomplishing far-mode goals”.
Exasperation. (Especially exasperation that someone else is not being agenty, about something that I could just as easily take over and get done myself.)
Ugh-field sensations.
Creative “stuckness”. Talking to a beta reader almost always clears up this problem for me inside of fifteen to twenty minutes even if the beta doesn’t actually have anything to say, and I still don’t instantly grab one and start yammering when I feel it.
Non-strategic “but I don’t know how to do X!” (This is sometimes useful strategically, though.)
I’m insufficiently “agenty” in the following situations:
-When my working memory gets too full, i.e. when I’m doing a clinical at the hospital and my mental “to-do” list gets too long, I stop caring whether I’ll get everything done, how I’ll get everything done, whether I know how to do everything I’m supposed to do, etc. I then become a little obedient puppy who follows people around waiting until they tell me to do things.
-Whenever my mental response to a situation is “are you serious?”, my actual response is likely to be less than enthusiastic.
-Feeling embarrassed because of someone else’s behaviour (similar to exasperation). I don’t know if this is a conditioned response to not getting along with fellow members of group projects, but whenever I’m watching someone else struggle because they’re unprepared, say/do something stupid, etc., my motivation to make an effort drops to zero.
By a five-second estimate, this might be able to interact usefully with some of those phone apps that ask you about stuff at random intervals. Every few minutes, at a random interval, the app asks if you’re feeling any of those things, has you click “yes” or “no”, and if yes it prompts you with the standard response you should have.
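A minimal sketch of that loop, as a command-line stand-in for such an app (the feeling/response pairs below are hypothetical placeholders, not anything from the thread):

```python
import random
import time

# Hypothetical feeling -> planned-response pairs; substitute your own.
PROMPTS = {
    "exasperation": "Decide deliberately whether taking it over yourself is really the best move.",
    "an ugh-field": "Spend two minutes writing down exactly what you're avoiding.",
    "creative stuckness": "Grab a beta reader and talk the problem through.",
    "'I don't know how to do X'": "Spend five minutes finding out how people learn X.",
}

def check_in():
    """Ask about each feeling; if it's present, show the pre-decided response."""
    for feeling, response in PROMPTS.items():
        answer = input(f"Are you feeling {feeling}? (y/n) ").strip().lower()
        if answer == "y":
            print("Planned response:", response)

if __name__ == "__main__":
    while True:
        # Wait a random few minutes between check-ins.
        time.sleep(random.randint(3, 10) * 60)
        check_in()
```

A real phone app would swap the input() prompt for a notification, but the structure (random trigger, yes/no check, canned response) is the same.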
Often, if people fail to be agenty in the context that you are interested in, they are simply saving their energies to be agenty in another.
As to 95% failure rate: It might have been for the best that many of those projects did not proceed. Just because you’ve started something doesn’t mean you should finish it.
Do you have any particular reason to believe that most people are really living at ideal agency levels and simply “saving” it for the things that matter? I’m pretty positive I’m both distinctly above-average in agency (judged by largely having a successful life and resolving my complaints with it over time), and still have fairly severe failures.
At least for me, “I don’t have the energy to accomplish X now, but I’ll put it on my list for when I can” and “I didn’t realize I could accomplish X” are very different states, and it seems like the average person has only a minimal sense of the former.
I don’t think it’s just people being stupid. I mean, if I try to become a PC, I will die. (I will abandon my support structures, become unable to sustain effort before building new ones, get horribly sick, and die.) Many people have big losses on failure (internal or external), like responsibility to family.
Still, since you’re a PC who knows lots of PCs: how do people, in practice, go about things like “At the age of 12, Badass Example left her village in East Timor to join the French Resistance. After assassinating Hitler, she founded the Donner Party and was elected President of Belgium.”? I don’t think you can just look up “How to locate and join a secret group” on eHow.
I can understand your concern, given how you are seeing PC. PC does not mean you have to do any specific thing, so it by no means implies that you should abandon your support structures and all that. To me, a PC is someone who makes conscious choices to honor the values they most care about. They tend to see novel solutions to problems because they are willing to consider anything that will solve the problem. They do tend to be less risk averse, but ideally they are not stupid about it :)
It’s a well-known bias that people are naturally much more motivated by fear of pain than by seeking pleasure (I’ve heard figures of 2 or 3 to 1, pain avoidance vs. pleasure seeking). This is not how I want to live my life, so I have taken steps to correct my psychology for this, to optimize for maximal utility over minimal suffering.
As far as how to do this, there are a lot of personal-growth gurus out there happy to teach you things. Landmark is the cheap version that is everywhere in the US, and I can recommend several people in California, New York, and Canada who I know to be especially good, depending on what specialty you are most interested in and what your tolerance for woo is. A lot of LWers are very intolerant of woo, which in my opinion is throwing the baby out with the bathwater, since I think that community does provide a lot of genuine value, so YMMV.
Something’s not getting through. I know you understand how depression works so I’m sort of at a loss here. I don’t think I have any options other than “Never do anything out of the ordinary” and “Bite off more than I can chew, then jump ship when everything comes crashing down, abandoning any people I was supposed to work with, and neglect everything while recovering from exhaustion”.
Your vision of being PC is different than mine—we have very different basic assumptions. I don’t think that there is anything in particular required, other than making conscious choices. So there’s nothing requiring you to bite off more than you can chew or to abandon anyone. I would recommend switching to a more PC mode to be done in small steps for most people in most situations. Just try to change one mental habit at a time at first. Pick the lowest hanging fruit. Talking to some sort of coach is very helpful if you can afford it, for help with deciding what to prioritize and having accountability. Does that help?
Not in the least. The only way I can interpret your “anything in particular required, other than making conscious choices” is adding “and I consciously choose to do so” after “I’m sick as a dog, there’s no way I’m going to class, or doing anything more tiring than collapsing back into bed with Downton Abbey fanfic”. Can I have an example, preferably of a very small step?
I’m not quite sure I’m parsing what you’re saying correctly, but I’ll give it a try. I would say that if you are genuinely sick, making a conscious choice regarding that would often mean doing what is required to recover quickly, so going to bed is quite reasonable. Other agenty things to do about sickness would be to take vitamin D or other remedies that you’ve determined will help you get over the cold faster. I also consider whether or not to take Dayquil or Nyquil: even though my understanding is that they don’t actually help you get over the cold faster, they do often help with work and sleep, so I actively weigh whether it is better to be more highly functioning while sick vs. focusing on speed of recovery.
There was a time when I had major surgery, for stage 4 endometriosis, and wanted to go to a wedding precisely a week afterward, when I was told recovery was 2-3 weeks. I was told that I probably shouldn’t fly, but that sometimes people recover early enough. So what I did in that case was to focus on recovery as hard as I could for the five days after the surgery. When the nurse asked me if I wanted to go home from the hospital as soon as possible, I said that I wanted to stay right where I was for the full time that was already prepaid for my stay, before having to pay for another night. Why would I cause extra trauma to my body while it was in the early stages of recovery, just to be in a familiar environment? Then, after the first 24 hours, during which I slept as much as possible but did some walking around every hour to be careful about blood clots, I hardly got out of bed for the next four days and took sleeping pills to encourage myself to stay in bed. I did this until it was time to pack, at which point I got up, packed, got on a plane, and was actually recovered enough by the wedding to be able to dance!
That is an example of agentiness as I see it, because I was working within the constraints I had, and actively thinking about and doing what would cause me to recover most quickly because of my goal of making it to the wedding.
Your surgery recovery example is weird, because (as you describe it) the nurse came to you and asked you to make a choice with well-defined options (any length of time between “as soon as possible” and “as late as is paid for”) with consequences that were already salient to you. That’s more agenty than “go with whatever the nurse suggests”, but I think most of us can make choices when handed a menu.
Let me take a very stupid example. You want some bylaws written. You look up “how to write bylaws”, and notice there’s a lot to it. You estimate you’ll become able to write bylaws after 200 hours of research. The options that immediately occur to you are:
Research bylaws as much as you are physically able to. This leads to about six hours of learning about bylaws, followed by a daze where you read the same sentence over and over again for ten hours until you can drag yourself into bed, followed by a few entirely unproductive days.
Research bylaws for a couple hours each day, then walk away while you are still fresh. This requires 100 productive days, which, adding days where you have to do something more important and entirely unproductive days, represents about a year. A year later, Patri has written perfectly good bylaws and has started looking for housing and your knowledge is useless.
Chuck non-vital projects like “write bylaws” and focus entirely on becoming more productive. Ten years later, wonder why you haven’t done anything with your life. Drown your crushing sense of failure in whichever drug you determine costs the fewest QALYs.
Think until you find a better option.
Your mental fog is too heavy to decide, so you stretch and stagger into the kitchen for a drink of water. The light bulb needs changing, but your knee and balance are acting up, so you save that for later. After a drink, a light snack, and a few minutes forcing yourself into motion, you manage to get yourself to shower. Afterwards, you feel able to think clearly.
I actually find examples like the surgery thing quite frequently in life—the most unusual thing about it may be the way I framed it. I notice options and possibilities and win/win scenarios for making unusual agreements where most people don’t.
With the hospital example, I think the nurse just asked me if I wanted to go home, as opposed to giving me a list of options and implications, although I do not have a recording of the conversation.
Regarding more complex examples, depends on things like opportunity cost. One of the first things I would do would be to discuss with Patri and other agents in the group. When you have multiple agents, you can optimize among everyone’s good ideas, and if you cooperate, you don’t end up with situations like case #2 where Patri and I duplicate work.
There was a time when I had major surgery, for stage 4 endometriosis, and wanted to go to a wedding precisely a week afterward, when I was told recovery was 2-3 weeks. I was told that I probably shouldn’t fly, but that sometimes people recover early enough. So what I did in that case was to focus on recovery as hard as I could for the five days after the surgery. When the nurse asked me if I wanted to go home from the hospital as soon as possible, I said that I wanted to stay right where I was for the full time that was already prepaid for my stay, before having to pay for another night. Why would I cause extra trauma to my body while it was in the early stages of recovery, just to be in a familiar environment? Then, after the first 24 hours, during which I slept as much as possible but did some walking around every hour to be careful about blood clots, I hardly got out of bed for the next four days and took sleeping pills to encourage myself to stay in bed. I did this until it was time to pack, at which point I got up, packed, got on a plane, and was actually recovered enough by the wedding to be able to dance!
I’ve done stuff like that plenty of times. Sometimes I’ve even done the reverse (stuff myself with medicines and stuff this afternoon so that I’ll be fine for the party tonight, even if that means I’d likely be very sick tomorrow and the day after, when I wouldn’t have much to do anyway so I wouldn’t mind staying in bed that much).
She means being proactive. If you want something to happen, you do it, or you make it happen however you can.
For example, there’s a local meetup group that we’ve had a couple meetings of—but we don’t have a schedule, we only have meetings when someone is proactive—they say “I’ve found the meeting place, the activity, I’ve emailed the people, come by, I will certainly be there, I will make sure we have a good time no matter how many people show up.” And then we have a meetup.
I’m reminded of the line from The Caine Mutiny that naval ships were designed by geniuses to be run by idiots. If you want to do something, you can either design the ship, or you can just run an existing ship. If we had schedules and a book full of activities and a large group on a mailing list, someone with all the proactiveness of a bag of rocks could make meetups happen. But we don’t have those things, so nothing happens unless someone is feeling “agent-ey.”
If we had schedules and a book full of activities and a large group on a mailing list, someone with all the proactiveness of a bag of rocks could make meetups happen. But we don’t have those things, so nothing happens unless someone is feeling “agent-ey.”
This suggests that the way to systematically make things happen is not to organize meetings, but to put in place such a system (schedules, book full of etc.) for organizing meetings. Otherwise you need someone to feel “agent-ey” every time; that doesn’t seem sustainable.
Edit: That is, if you’re a person in such a group and you’re feeling “agent-ey”, Manfred’s comment suggests that your efforts would be better spent putting a system in place that would allow things to happen without any agentness involved, as opposed to putting forth the effort to make a thing happen this one time. I’m not sure if my experience supports this; I’ll have to think about it.
This mirrors my own experience—the way I’ve found to have the most influence and get the most done is often not being the one completing the tasks, but rather being the one creating and documenting the processes/procedures, and teaching and training other people to do the work.
It’s also far more lucrative from a career standpoint! :D
PC refers to “player character.” In many games, there are many characters, most of whom don’t have goals and function primarily as scenery, and then there are PCs, who both have goals and will move heaven and earth to achieve those goals.
As for “agentiness,” I think a similar term is executive-nature. They’re an entity that can be well modeled by having goals, planning to achieve those goals, and achieving those goals. Many people just react to life; agents act.
Ah, thank you. I’m quite familiar with the term, and with the PC/NPC distinction in games; I just didn’t make the connection in this context. So the idea here is that most people don’t have goals? Or have goals, but don’t act to further them?
Would you consider removing the last paragraph of this post? It reads like an overt “look at all the high-status people I know! I’m high-status too!” bid and jars with the rest of the post, which is significantly better.
Hypothesis: One reason there aren’t many agenty people is that a lot of parents find that agenty children are more of a challenge to their authority than they want and/or take more resources of various sorts than they feel they can afford.
My feeling about this is: Look at animals. They pretty much just do random stuff: hang out here, hang out there, eat when they’re hungry, etc. It’d be kind of surprising if evolution had taken us from that to industrial strength ass-kicking fast enough for civilization not to have formed in between.
(Additionally, there’s the whole near/far idea, that the intelligent parts of our brains aren’t supposed to be controlling our behavior, really.)
They pretty much just do random stuff: hang out here, hang out there, eat when they’re hungry, etc.
That sounds like the behavior of a bored domestic cat, or a bear in a mediocre zoo. Wild animals, especially clever, complex-brained ones can get up to some abstract or spontaneous stuff. Elephants hold funeral vigils for their dead (and at least sometimes for dead humans as well); orcas hunting seals will play sadistically with a captured pup for minutes or hours before getting down to the business of feeding; echidnas will go to incredible lengths to explore something novel, even after ascertaining it has no chance of providing them with food. There’s one recent funny case of an Antarctic leopard seal attempting to teach a human scuba diver how to kill penguins (in much the same way cats sometimes catch mice and leave them for their house-humans). When you add in complex social interactions, animal behavior (particularly that of any species prone to human-noticeable levels of personality variation) is quite dynamic—and even a lot of what looks like superficially simple behavior is the product of low-level drives common to many organisms being acted out in unique ways by the individual critter.
Thanks for the info. My impression was that emotions are like different modes that an animal can operate under, and you switch modes in a kind of haphazard way based on social and environmental cues, energy level, etc. Does that sound more or less accurate?
Has coherent goal-directed behavior spanning multiple days been observed in animals?
Well, that’s sufficiently vaguely-phrased that just something like a pack of wolves or orcas pursuing their quarry for days, which does happen, would seem to qualify. Or the bird building a nest as described below.
FWIW, pregnant African elephants often find a good time and place to give birth around the end of their term and then consume the leaves of a certain tree to induce labor (humans in the area use it for the same purpose). The pregnancy takes over a year and the labor itself, once begun, can take several days.
An even stronger criticism of AGI, in both its agent and tool forms, is that a general intelligence is unlikely to be developed for economic reasons: specialized AIs will always be more competitive.
Economic reasoning cuts many ways. Consider the trivial point known as Amdahl’s law: speedups are always limited by the slowest serial component. (I’ve pointed this out before but less explicitly.)
Humans do not increase their speed even if specialized AIs are increasing their speed arbitrarily. Therefore, a human+specialized-AI system’s performance asymptotically approaches the limit where the specialized-AI part takes zero time and the human part takes 100% of the time. The moment an AGI even slightly outperforms a human at using the specialized-AI, the same economic reasons you were counting on as your salvation suddenly turn on you and drive the replacement of any humans in the loop.
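To spell out the arithmetic behind that limit (a minimal sketch; the symbols f and s are introduced here for illustration, not taken from the original comment): let f be the fraction of the work only the human can do, and suppose the specialized AI speeds up the remaining fraction 1 - f by a factor s. Amdahl’s law then gives an overall speedup of

S(s) = 1 / (f + (1 - f) / s), which approaches 1 / f as s grows without bound.

However fast the specialized AI gets, throughput stays capped by the human’s share of the work; swapping the human out for an even slightly faster AGI is exactly what raises that cap.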
Since humans are a known fixed quantity, if an AGI can be improved—even if at all times it is strictly inferior to a specialized AI at the latter’s specialization—then eventually an AGI+specialized-AI system will outperform a human+specialized-AI system barring exotic unproven assumptions about asymptotic limits.
(What human is in the loop on high frequency trading? Who was in the loop when Knight Capital’s market maker was losing hundreds of millions of dollars? The answer is that no one was in the loop because humans in the loop would not have been economically competitive. That’s fine when it’s ‘just’ hundreds of millions of dollars at stake and companies can decide to take the risk for themselves or not—but the stakes can change, externalities can increase.)
You’re referring to the Turing Test as a criterion and accusing us of anthropomorphizing AI?
I think that an AI might become intelligent enough to destroy the human species, and still not be able to pass the Turing Test. Same way that we don’t need to mimic whales or apes before being able to kill them.
It’s not us who are anthropomorphizing AI; it’s you who’re anthropomorphizing “intelligence that rivals and eventually surpasses the human intelligence both in magnitude and scope”.
Looks like you’ve completely missed the point of SIAI and massively misunderstand AI theory.
It seems to me like you have not even remotely the right order of magnitude of an idea of just how immense the laziness of some programmers can get. And the lazier programmers get, the more they try to write programs that do all their own work for them.
The ultimate achievement of the lazy programmer is to write a one-time program that will anticipate future needs to write programs, and write programs that can better anticipate such future needs and thus better write programs that meet the need, ad infinitum, without any further intervention from said programmer.
SIAI actually agrees that the above is probably not the most economically sensible thing to do and that it is not what most AIs, or even AGIs, developed in the near future will look like. However, SIAI is also aware that some people will, despite this, still want to be the ultimate lazy programmer and write the ultimate recursively self-modifying AI. No reasonable amount of reasonable arguments will change this fact.
Therefore, something must be done to prevent those AIs they will create from exterminating us. SIAI, in no small part through the work of Yudkowsky, has concluded that the best method of achieving this is currently through FAI research, and that eventually the only solution might be to make a Friendly self-modifying AGI before anyone else makes a non-Friendly one, so that the FAI has an unfair advantage and can outsmart any future non-friendly AGIs.
If you want to avoid being logically rude, you will contest either the premises (1: Some people will attempt to make the ultimate AGI. 2: One of them will eventually succeed. 3: Ultimate AGIs are Accidentally Unfriendly by default.) or some element in the chain of reasoning above. If you fail to do so, then the grandparent comment is understating how much you’re missing the point and sidetracking the discussion.
I think there’s a bit of a chicken-and-egg problem when you’re not much of an agent yet, and you haven’t accomplished anything interesting under your own steam, so it doesn’t really even seem worthwhile to plan anything out. (Another failure mode: it does seem worthwhile to plan things out, but only because you haven’t yet noticed that you rarely work on any of your plans.) Probably it makes sense to debug what’s making you ineffective and build up a track record before tackling anything really big (see success spirals).
If you’re one of those people who makes plans but never works on them, it might be a good idea to start being very distrustful of yourself whenever you say you’re going to do something. Concrete example: Maybe there’s some topic you’ve been intending to study for a while. If you were distrustful of yourself, instead of just continuing to intend to study it, you might block out some time during your week, then set up reminders on your cell phone, rules for when you can skip your study period, rules for when you’re allowed to abandon the project entirely, etc.
The problem with these behavior regulation devices is that building them takes a large activation energy. Here are my solutions for this so far:
Jot down ideas for a device whenever you have them, even if you aren’t going to implement them immediately.
Wait until you’re in an especially energetic or inspired mood, then take advantage of it and implement a few devices (or debug ones that failed).
Have the devices come into effect a while after you’ve finished building them (e.g., build your device in the evening and have it activate the next morning).
Consume stimulants like coffee or kratom.
In my experience, after using such devices for a while I no longer needed them as much.
Of course, there are other components to being an agent, e.g. fearlessness. The modern world is pretty safe, but evolution calibrated us for a world that was much more dangerous, especially with regard to social blunders.
I’m wondering if what you call agency and what EY calls PC (or, in extreme/fictional cases, heroic responsibility) is what the rest of the (English-speaking) world calls initiative/motivation/perseverance?
what the rest of the (English-speaking) world calls initiative/motivation/perseverance?
I don’t think it’s quite the same thing; nor do I think your three choices are synonymous or mean the same things.
EY used to use a term ‘anti-sphexishness’, based on Hofstadter’s description in GEB of a sphex wasp that executes its nesting program endlessly if someone messes with it. That seems to be synonymous with ‘PC’ or ‘heroic responsibility’, but one would certainly not describe it as ‘motivation’ or ‘perseverance’ - after all, the sphex wasp endlessly executing its program displays motivation and perseverance beyond any mere human! (Motivation and perseverance beyond that, in fact, of the biologist who was messing with it in Hofstadter’s description.)
Or take this example: an Asian kid is told by his parents to become a doctor, and after endless studying gets into med school, graduates, does his internship, etc., and becomes a full-fledged doctor. As expected, such things correlate with Conscientiousness, and the doctor could fairly be described as having ‘motivation’ and ‘perseverance’ - but did he display ‘initiative’?
Similarly we can think of examples of people who display ‘initiative’ and ‘motivation’ but not perseverance (think ADHD) while also not especially being ‘PC’ - they follow their whim in choosing topics of interest (initiative, because certainly no one told them to pick said topics), and they prosecute said topic with great energy and intensity (motivation), but this leads to no lasting change and represents no deep thought about their goals, preferences, and the state of the world.
I’m not sure. The meaning of PC/anti-sphexishness/heroic-responsibility seems to be a sort of stepping outside of routine, comparing the status quo with the original intrinsically desirable goals the status quo was supposed to achieve, and taking action to remedy the discrepancies.
You could call this ‘visionary’ or ‘philosophical’ or ‘righteous’ or ‘wise’ but none of them seem right to me—probably why those 3 terms were invented. ‘Enlightened’ comes close but only if you were living 2 centuries ago, because these days ‘enlightened’ is sarcastic or religious in undertones. (That is, figures like Voltaire were ‘enlightened’ but also definitely reminiscent of the 3 terms.)
Right after WWII John Holt was finishing out his U.S. Navy tour of duty on the west coast. In Never Too Late he tells this story about his favorite band at the time, the Woody Herman Herd, who he had listened to only on records:

They had been playing on the East Coast, and one of the many reasons I was eager to get out of the Navy was so that I could go hear them. Just as I was getting close to the date of my discharge, I heard terrible news—the Herman band was going to come to the West Coast to play for a couple of months, and was then going to break up. I was going to miss them! I would never hear them! I was such a timid and conventional young man that it never occurred to me, not for a second, that I might stay out on the West Coast, arrange to get discharged there, see something of California and the Northwest, and hear Woody Herman in the process. But no, my home was in the East, and when the war ended I had to go home.

(Luckily he did manage to schedule his trip back east so that he crossed paths with the Herd in Chicago.)
As you say, getting things to happen takes a lot of time and effort, which is one of the things I learned when working on group projects in school. I think that most people, when they realize just how much effort it takes to Get Stuff Done, usually end up saying “screw it, there are other things I’d rather be doing” and go on doing whatever it is that they were doing before.
Thing is, getting things to happen doesn’t actually take a LOT of time and effort (depending on what you’re trying to make happen). The difference between something happening and not happening can be as simple as making a Facebook event and inviting a bunch of people. The key is that someone has to take responsibility, however little or however much effort that takes, and say “I will be the one to try and make this happen” instead of saying “Man, this should really be a thing.”
Never ask. Asking a bunch of people about that will end up taking forever and produce no conclusive answer. Schedule events some time in advance, at a time you think will work for multiple people, with a concrete time and place. People will happily give you input after you do this, and you can change it later, but this is much more reliable.
I think the biggest problem with group projects in school is that a) the people you’re working with aren’t pre-filtered for motivation, and b) you’re working on toy problems that, likely, no one in the group would bother to do if it weren’t assigned. Some people want to get 90% because they have a scholarship and need to maintain a certain GPA, but some people are happy just to pass and want to do the minimum of work, and pretty much everyone has the attitude that “it’s just school”.
I’ve had group projects break down both because I was the most motivated person in the group (and lacked the leadership/interpersonal skills to deal with this), and because I was the least motivated person in the group (I’m somewhat spoiled and used to getting As without putting in too much time, and I was not prepared to have 3-hour group meetings starting at 8 pm after class.)
In general, I would expect group projects to run more smoothly in the workplace, both because people are more motivated–they’re in their chosen field, they’re getting paid, they’re working on a real world problem, etc–and because the process of getting hired filters for interpersonal skills, which isn’t the case for getting accepted into college or university, so you’re less likely to end up with people who can’t work with other people.
Grade school didn’t assign me many group projects. In fact, I can only remember one. And on that one I think I tried cooperating for like 15 minutes or something and then told the other two kids, “Go away, I’ll handle this” because it was easier to just do all the work myself.
Sometimes our early life experiences really are that metaphorical.
Similar, possibly relevant anecdote from my own life:
In a recent computer science class I took, we (the entire class as a whole) were assigned a group project. We were split up into about 4 sub-groups of about 6 people each; each sub-group was assigned a part of the project. I was on the team that was responsible for drawing up specifications, coordinating the other groups, testing the parts, and assembling them into a whole.
Like Eliezer, I quickly realized that I could just write the whole thing myself (it was a little toy C++ program). And I did (it took maybe a couple of days). However, the professor was (of course) not willing to simply let me submit a complete project which I had written in its entirety; and since the groups were separated, there was no way for me to submit my work in a way that would plausibly let me claim that any sort of cooperation had taken place.
So I had to spend the rest of the semester trying to get the other groups to independently write the code that I had already written, trying to get them to see that the solutions I’d come up with were in fact working ones, and generally having conversations like the following:
Clueless Classmate: How should we do this? Perhaps [some convoluted approach that can’t work]?
SaidAchmiz: Mmm… perhaps we might instead try [the approach I’d already written and tested].
CC: That doesn’t make any sense and will never work!
SA: Sigh.
I’m not quite sure what this could be a metaphor for, but it certainly felt rather metaphorical at the time...
My question is: are there some straightforward heuristics one can apply to find/select a workplace where such things occur as little as possible? At what kinds of places can one expect more of this, and at what kinds less? The effort to find a workplace where you do NOT have to handle such situations seems like it would be more effective in the long run (edit: that is, more effective in achieving happiness/sanity/job satisfaction) than learning to deal with said situations (though of course those things are not mutually exclusive!).
My question is: are there some straightforward heuristics one can apply to find/select a workplace where such things occur as little as possible? At what kinds of places can one expect more of this, and at what kinds less?
Yes, and it is an extremely high expected-value decision to actively seek out people who understand which workplaces are likely to be most suitable according to this and other important metrics.
Grade school didn’t assign me many group projects. In fact, I can only remember one. And on that one I think I tried cooperating for like 15 minutes or something and then told the other two kids, “Go away, I’ll handle this” because it was easier to just do all the work myself.
Sometimes our early life experiences really are that metaphorical.
And sometimes they aren’t. As a ‘grown up’ you outright founded (with assistance) an organisation to help you handle your new project as well as playing a pivotal role in forming a community around a relevant area of interest. Congratulations are in order for learning to transcend the “just do myself” instinct—at least for the big things, when it matters.
It seems to me that “selecting people with whom I can usefully cooperate” is different from “learning to cooperate with arbitrarily assigned people”. Do you think that captures the distinction between Eliezer’s grade school anecdote and his later successes, or is that not a meaningful difference?
It seems to me that “selecting people with whom I can usefully cooperate” is different from “learning to cooperate with arbitrarily assigned people”. Do you think that captures the distinction between Eliezer’s grade school anecdote and his later successes
It is certainly a significant factor. (Not the only one. Eliezer is also wiser as an adult than he was as a child.)
Math over metaphor. This is a common experience. Assume a child at the 99th percentile of their age group by intelligence (a rough check of the arithmetic is sketched below):

Elementary school is assigned by geography, so average intelligence is the 50th percentile (or fairly close).

By middle school there may not be general tracking for all students, but very low performers have been tracked off (say 20% of students), so average intelligence is the 60th percentile.

In high school there is often a high track and a low track; split the students 50/50 and the average intelligence of the high track is the 80th percentile.

If the student then goes to a college that rejects 90% of applicants (or gets work in a similarly selective profession), average intelligence is the 98th percentile,

and all of a sudden the student is now well socialized and has learned the important skill of cooperating with their peers.

EDIT: the above holds if you track “effectiveness”, which is some combination of conscientiousness and intelligence, instead of intelligence alone. In practice I expect most tracking systems capture quite a bit of conscientiousness, but the above reads more cleanly with intelligence on each line than with “some combination of conscientiousness and intelligence”.
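A minimal sketch of that check, assuming (as a simplification) that percentiles within each remaining pool are uniformly distributed, so each pool’s average is just the midpoint of its percentile range:

```python
def mean_percentile(low, high):
    """Average population percentile of a pool spanning [low, high]."""
    return (low + high) / 2

elementary = mean_percentile(0, 100)    # everyone: 50th percentile
middle     = mean_percentile(20, 100)   # bottom 20% tracked off: 60th
high_track = mean_percentile(60, 100)   # top half of that pool: 80th
college    = mean_percentile(96, 100)   # top 10% of the high-track pool: 98th

print(elementary, middle, high_track, college)  # 50.0 60.0 80.0 98.0
```

The uniform-pool assumption is crude, but it reproduces the figures above.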
This is a bit off-topic, but I think that the word “competence” effectively conveys the meaning of, “some combination of conscientiousness and intelligence”.
I was generally lucky with group project partners at school (both high school and college), I guess; I didn’t have much explicit conflict, and I never had one actually fall apart. There was one class in which I was in a group of three in which I did 95% of the work, one of the other two people did about 60% of the work, and the third guy basically didn’t show up, but I was okay with that. I wrote code that worked, the second guy supported me so I could write that code, and the third guy would have only gotten in the way anyway.
Edit: (Yes, that adds up to more than 100%, because of duplication of effort, inefficiency, correcting each other’s mistakes, etc. In other words, it’s a joke, along the lines of “First you do the first 80%, then you do the second 80%.”)
I interpreted that as “I did 95% of the work that I said I would do, and one of the other partners did 60% of the work that he committed to do, and the third partner didn’t do anything.” But yeah, if you interpret it as straight-up percentages, it doesn’t really add up...
I could’ve sworn there was a bit in the grandparent that acknowledged this apparent contradiction, and pointed out pair programming as the explanation, but it seems to be gone. That would, in any case, account for the percentages.
Out of curiosity, am I the only one who has experienced at least some instances of productive group work in high school and college? I am not nearly as smart as most people here, so perhaps that fact played a role, since I actually needed the cooperation of other people in order to get the job done.
You are affirming the consequent and also overgeneralizing.
I argued that ‘some economically valuable uses of AGI are replacing humans’ (disproving Szabo’s core argument that “AGI can always be outperformed at a specific task by a specialized-AI, therefore, there are no economically valuable uses of AGI”).
That is not the same thing as ‘all replacements of humans are economically valuable uses of AGI’ for which ‘non-AGI HFTs replacing humans’ serves as a disproof (but so would cars or machines, for that matter).
Your argument is that humans will eventually become the limiting factor in economic systems thus AGIs will be needed to replace them.
Good strawmanning. Very subtle.
gwern: The moment an AGI even slightly outperforms a human at using the specialized-AI, the same economic reasons you were counting on as your salvation suddenly turn on you and drive the replacement of any humans in the loop.
That’s the key part. The specialized High-freq-trading software won’t be replaced, but the humans who use that specialized software will eventually be replaced if someone figures out how to make an AGI that can think about all the relevant variables and can be scaled to go faster and better than a human.
For instance, this Divia lady you mention and her husband even changed their last name to ‘Eden’, the name of the earthly paradise in the Bible, and married in a ceremony officiated by Yudkowsky.
A marriage ceremony, officiated by an available elder of some sort. Name changes. Wow, what sort of crazy culture or subculture would do that kind of thing? Oh, right. Most of them, in some shape or form. This was actually a hat tip towards normality, not the reverse. Like celebrating Christmas with family and friends without actually believing in a Christ.
V_V’s comments do serve as data points toward what elements can look cultish to outsiders, even though, I agree with you, such a judgment would be unfair, as pretty much every community does these things.
V_V’s utterances about what looks cultish are generally useless when it comes to talking about ideas: trying to shame us into not having certain ideas just because they look bad is a rather circular and useless argument. (And frankly, “transhumanism”, “cryonics”, and “AI apocalypse” bring to mind the low status of an SF geek, not the low status of a cult, so V_V’s words doubly miss the mark in this respect.)
On the other hand, practices like marriage ceremonies and cohabitation pattern-match more, and so they’re something to be careful about from a public relations perspective. But it’s not as if I’m sure whether they’re a net positive or a net negative all things considered; so consider my words to be hesitant and uncertain, not really sharing in V_V’s criticism...
On the other hand, practices like marriage ceremonies and cohabitation pattern-match more, and so they’re something to be careful about from a public relations perspective.
Pattern matching and public relations are both interesting and important and using V_V as an outsider datapoint while doing so would produce unreliable results.
I have to disagree here. Even if, from the outside view, Christian marriage or whatever is equally as weird as Yudkowskian marriage, it definitely feels cultish to me, and I’m an atheist. The normal way to get married is NOT by a friend of yours whose teachings you follow.
Errh. What is the normal way to get married then, from your view? Mail a letter to the nearest municipal or judicial office?
“Getting married”, once shed of all religious connotations and other nasty bits, is a social contract made before witnesses and published so that: 1) the spouses are more motivated to cooperate and remain at a high level of mutual affection, and 2) individuals not part of the marriage (i.e. everyone else) are aware that these spouses are “together”, presumably for a long time, that they should not get in their way, and that they are not “available”.
I don’t think you’re using the right reference class for the question. If we’re talking about the set of people who might find Less Wrong interesting, I predict that most of them would find it more weird if two atheists from atheist families got married by a priest than if they got married by the head of an Internet community. (Most normal for that reference class is picking a celebrant who’s just a friend, or a Unitarian minister, or a comedian, etc.)
I’ve got a number of friends in non-SingInst/LW circles who’ve been married in public ceremonies overseen by friends whom they consider wise, or instrumental in their social groups, or simply good speakers. I don’t have any actual data, but in the circles I run in it seems like one of the more popular secular options.
First, I’d like to ask why you didn’t reply directly to my previous comment, and instead started an entirely separate top-level comment. I hope your motive wasn’t less than honorable, like hoping that I wouldn’t notice and people would infer that I was tacitly admitting I was refuted? Hopefully I’m just being paranoid and you were careless about posting your comment or something.
But you didn’t provide an argument for AGI being more effective than more and more specialized AIs at replacing humans at these tasks.
The claim “AGI will be more effective at replacing humans in using specialized-AIs” was assumed in my argument, and also not criticized by Szabo, who thinks his argument works even granting the existence of such AGI:
Even if there was such a thing as a “general intelligence” the specialized machines would soundly beat it in the marketplace. It would be very far from a close contest.
Great piece, Shannon. Brings to mind a couple of things.
What you call “agency” is, in Landmartian, “being cause in the matter,” being “at cause,” “taking a stand,” and acting “consistently with that stand.”
This is distinguished from being caught in a “racket,” defined as a persistent complaint combined with a fixed way of being. Someone caught in a racket does not take responsibility for things as they are, but rather sets up stories that express being a victim of circumstances or others. The generic alternative is to accept responsibility, as a stand, not as a “truth.”
That’s been oft-misunderstood. I am responsible for, say, the WTC attack, as a stand, not as a fact. If I’m responsible, it means that I can look at my life as missing something that might make a difference, as full of possibilities.
In any case, most people, most of the time, are not at cause, we are simply reacting.
Then, if we actually take responsibility, beyond merely saying a few words, we act in accordance with that, which includes making mistakes, picking ourselves up and acting again, varying behavior as necessary to find a path to fulfillment.
A conversation I’ve had is “How many people does it take to transform society?”
The answer I’ve generally come up with is two. It’s amazingly difficult to find two. Maybe that’s just my racket, but your story shows how two can sometimes find more, if more are required to realize a stand. Two is where it starts. At least one of the two must be willing to be at cause, and able to stand there.
Okay, it starts with a declaration, with an assumption of responsibility, with taking a stand, but creating structures for fulfillment, they are called, is something that is strengthened with practice.
Wow, way to miss the point and not respond to the argument—you know, the stuff that is not in parentheses.
(And anyway, how exactly am I supposed to give an example where AGI use is driven by economic pressures to surpass human performance, when AGI doesn’t yet exist?)
So, even though you didn’t clearly contest any of the premises nor the reasoning, let’s assume that the second paragraph is a rebuttal to premises (1:) and/or (2:) of the grandparent.
An AGI is not something a bunch of nerds can cook up in their basement in their spare time.
I contest this premise, and I’m really wondering how you came up with it. As technology progresses, we’ve noticed that it gets easier and easier to do stuff that was previously only possible for massive organizations.
Examples include, well, anything involving computers (since computers were first something only massive organizations could possess, until a bunch of nerds cooked one up in their basement), creating new software in the first place, creating videogames, publishing original research, running automated data-miners, creating new hardware gadgets, creating software that emulates hardware devices, validating formal mathematical proofs, running computer simulations...
...I could probably go on for a while, but I’m being warned that this should be enough to point at the right empirical cluster. Basically, we have lots of evidence saying that new-stuff-that-can-only-be-done-by-large-organizations can eventually be done by smaller groups, and not much that sets AGI apart as a particular exception other than the current perceived level of difficulty.
If an AGI will always be less effective than its contemporary specialized AIs, people will be unwilling to put their money, time and effort into it.
I just pointed out how economic reasoning can justify an AGI which is outperformed at any specific task by a specialized-AI. I’m not even an economist and it’s a trivial argument, yet—there it is.
Even if one had a formal proof that AGIs must always be outperformed, that still would not show that AGIs will not be worth developing. You need a far more impressive argument covering all economic possibilities, especially since software & AI techniques are so economically valuable these days with no sign of interest letting up, so handwaving arguments look implausible.
(I would be deeply amused to see a libertarian like Nick Szabo try to do such a thing because it runs so contrary to cherished libertarian beliefs about the value of local knowledge or the uselessness of elites or the weakness of theory, though I know he won’t.)
Some future technology we currently have no idea how to develop will do X (“A miracle occurs”).
Yeah, you treat the concept of new technologies (even though we experience new technologies every single year) on the same level as ‘miracles’ (which we’ve never experienced at all). I get that.
And I’ve seen lots of religious people argue thusly: “You believe in ‘electrons’ and ‘quarks’ that you’ve never seen with your own eyes, and I believe in angels and demons that I’ve never seen either. Therefore your ‘scientific’ ideas are just as faith-inspired as mine.”
If we’re to throw guilt-by-perceived-association around, then I think that your criticism of LW ideas is typically religious. You’re following the typical argument of the religious, where you try to claim all belief in things unseen is equally reasonable, all expectations of the future are equally reasonable, and hence “see, you’re also a religion after all”.
I think I’ll have to revise my position—you are really not saying anything worth hearing.
Maybe I’m attributing malice where a more likely explanation exists, but a policy which seems deliberately designed to incentivize groupthink appears to be more consistent with a cult rather than “a community blog devoted to refining the art of human rationality”.
A group of specialized AIs doesn’t need to have shared goals or shared representations of the world. A group of interacting specialized AIs would certainly be a complex system that will likely exhibit unanticipated behavior, but this doesn’t mean that it will be an agent (an anthropic model created in economics to model the behavior of humans).
I don’t think this is a meaningful reply, or perhaps it’s just question-begging.
If having a coherent goal is the point of the human in the loop, then you are quietly ignoring the hypothetical given that ‘every human skill has been transferred’ and your points are irrelevant. If having a coherent goal is not what the human is supposed to be doing, well, every agent can be considered to ‘exhibit unanticipated behavior’ from the point of view of its constituent parts (what human behavior would you anticipate from a single hair cell?), and it doesn’t matter what the behavior of the complex system is—just that there is behavior. We can even layer on evolutionary concerns here: these complex systems will be selected upon and only the ones that act like agents will survive and spread!
Assuming that AI technology will necessarily lead to a super-intelligent but essentially human-like mind is anthropomorphization in the same sense that the gods of traditional religions are anthropomorphizations of complex and poorly understood (at the time) phenomena such as the weather or biological cycles or ecology.
Yeah, whatever.
Arguing against ‘necessarily leading to a super-intelligent but essentially human-like mind’ is a big part of Eliezer and LW’s AI paradigm in general going back to the earliest writings & motivation for SIAI & LW, one of our perennial criticisms of mainstream SF, AI writers, and ‘machine ethics’ writers in particular, and a key reason for the perpetual interest by LWers in unusual models of intelligence like AIXI or in exotic kinds of decision theories.
If you’ve failed to realize this so profoundly that you can seriously write the above—accusing LW of naive religious-style anthropomorphizing! - all I can conclude is that you either are very dense or have not read much material.
Still, that doesn’t imply that AGI will be economically viable unless you show that humans will still be the limiting factor after every human skill that can be transferred to a specialized AI has been transferred.
If every human skill has been transferred, including that of employing or combining specialized-AIs, then in what sense do the groups of specialized-AIs not then comprise an AGI?
This argument would seem to reduce you to confronting a dilemma: if every human skill has been transferred to specialized-AIs, then a complex of specialized-AIs by definition now forms an AGI which outperforms all humans; if not every human skill has been transferred, such as employing specialized-AIs, then there is the very large economic niche for AGIs which I have identified with my Amdahl’s law argument. So either there exist AGI which outperform all humans, or there exists economic pressure for AGI.
Oh. I thought it was just for replying to the comment which was negative. I guess this is what Wedrifid or whoever it was meant when they pointed out that the feature could strike in unexpected places...
I want someone to undo this part, if not the whole thing. Discouraging people from replying to people who are unpopular or wrong is bad. Preventing new users who are perceived as wrong from defending themselves is extremely bad.
If you don’t want to discourage replies to downvoted comments, then you want to undo the whole thing. That’s what this feature is for. It shouldn’t be doing anything else, and if it is then that’s a mistake that should be corrected.
Regardless of whether or not we should discourage replies to downvoted comments, we should avoid discouraging replies to the replies to downvoted comments. People who are downvoted should not be discouraged from speaking up about their ideas, even if those ideas are bad. That’s the way that those people go about improving.
Additionally, if they’re discouraged from defending their ideas in more detail or from addressing criticisms, but they actually happened to be correct or at least to make a good point, then discouraging them is an extremely bad idea.
Regardless of whether or not we should discourage replies to downvoted comments, we should avoid discouraging replies to the replies to downvoted comments.
“but still no AGI or commercial nuclear fusion, despite these having constantly been predicted to be in the next 25 years for the last 60 years.”
Please clarify this plainly for me: Are you saying these technologies will NEVER be developed? Not in 25 year, nor in 100 years, nor in 500 years, nor in 10,000 years?
Is your whole disagreement a matter of timescales—whether it is likely to happen within our lifetimes or not?
Because if so, then there are a lot of us here who likewise don’t expect to see AGI in our lifetimes.
If you’re not saying “It will NEVER happen”, then please specify a date by which you’d assign probability > 50% to these technologies having happened.
But until then, again your whole argument seems to be “it hasn’t happened yet, so it will never happen.”
I’m a little confused by this post, in that it seems to be a little all over the place and comes off as a general “how things weren’t so good, but are better now and there’s a happy end” story. It says that it’s about agency, but several of the problems involved (e.g. there being two factions with contradictory goals in the original group, the fact that the purchasing negotiations were complex) have no obvious connection to people being agenty.
I also have only a rough guess of what’s meant by “agenty” in the first place, which might contribute to my confusion. I think this post could benefit if it explicitly gave a definition for the term and more clearly linked the various parts of the story with that definition. There are already some parts where the connection to agency is made quite clearly—e.g. “Today I know that if an agenty person has to write bylaws and they don’t have experience, they go off and read about how to write bylaws”—but they’re currently the exception rather than the norm.
I’m also not entirely convinced that “agency” is the best possible way of characterizing the events in question. For instance:
This sounds to me like “some of the people were more motivated by this goal than others” could be both a more accurate and a more useful description than “some were more agenty than others”. More useful in the sense that the first description allows for the possibility of asking questions like “Might these people work harder if they were more motivated? Could they be more motivated? Is this lack of motivation a sign of the fact that they’re not so strongly on board with this anyway, suggesting troubles later on?” and then taking different kinds of action based on the answers you get. The second description feels more likely to just make one shrug their shoulders and think “eh, they’re just not sufficiently agenty, that’s too bad”. Or to put it in other words, the “agent” characterization sounds like an error model of people rather than a bug model.
So I think that this post would also benefit from not just defining “agenty”, but also saying a few words about why we should expect this to be a useful and meaningful concept.
Hi Kaj,
Thanks for taking the time to write out this thoughtful feedback and these questions.
I needed an example to make my point, and the founding of Tortuga was the one I came up with. That particular story was all over the place and a mess, which is kind of the point. Real life is messy. The whole thing was a big mess, that Patri and I and group somehow managed to persist through and make work.
The ending I was shooting for was more appreciation of people like Patri, especially those in this community, and both inspiration and caution regarding agency. Its really really really hard, and some people do it. If you try it and you’re not used to it, you’ll probably fail immediately. This is to be expected, and if you really want to be an agent, you don’t give up and let that stop you, like it would for most people.
Yes, that was just an example of the stupid crap that came up in this particular case. How we dealt with it was agenty—we didn’t just let it destroy the project—Patri did research, I figured out how our case was like an example in his research, and he figured out a solution to the problem we identified. In most cases, when a group got stuck with something like two factions, it would simply fail, and that would be the end of the project.
Sorry about the lack of definition of agency—it’s a term used very frequently by the Less Wrong types I hang out with, so I figured it was common and safe lingo to use. I should have known better, since I also had someone ask in another post. Here’s my quick answer:
And here’s something from Lukeprog:
I don’t think I said anything about comparing agency, or that every single thing I wrote was specifically and directly about agency—you are arguing with a claim I didn’t make. Writing that was an attempt to show appreciation for people doing anything, since most people in my experience do absolutely nothing to make things happen outside of societal norms. It’s frustrating that everyone doesn’t do more, but I do want to give at least some positive reinforcement for doing anything. If it hadn’t been for the people who came to meetings, nothing would have happened, in the same way that if Patri hadn’t been there, nothing would have happened. It’s just that people who come to meetings are far more common than Patris. Feel free to ask clarifying questions on this—I realize it’s not the most elegantly written, and I’m not quite sure how to get at exactly what you’re after.
People who are agenty are people who make shit happen. Amazing things don’t just happen by themselves. That’s why the world doesn’t function the way we can all imagine it functioning in more ideal circumstances. To really make the world as awesome as it could be, we need more agents. And the agents we do have are almost all struggling with the sort of problems that came up in the founding of Tortuga as I described it. The problems are different in different situations, but generally there is a very small number of people on any given project who are really thinking about it, acting with intention, and keeping the big picture in mind, and they have to manage everything and everyone else, and this is very challenging.
I feel like I should edit this more since these are such good questions, but unfortunately I don’t have the energy for it right now and am unlikely to in the near future. I hope this helps!
I think what Kaj is responding to, is that the post doesn’t have the abstract clarity of purpose of a typical post in the Main forum. It’s more of a personal history and a passionate exhortation to reward agency when it appears within the LW community. It’s a bit out of line for me to play LW front-page style-pundit, when I am mostly a careless creature of Discussion and have no ambition to shape LW’s appearance or editorial norms, and I even sort of like the essay as it is; but it probably does deserve a rewrite. (It’ll get twice as many upvotes if you do it really well.)
Thanks for explaining.
It’s true, my writing is not as high quality as most of the top-level posts. I’m not a professional writer at all. Although I did get someone good to edit this for me, so it’s much better than it would have been without that.
I don’t know of anyone who is a better writer than I am who understands and cares enough about this content enough to put it out there, so I did it myself. If you or anyone you know who is a better writer would like to do a rewrite, by all means, I would love for them to do it!
I don’t think it’s the general quality of your writing that’s causing problems; I think it’s a particular, specific flaw in this essay. Compare this comment thread to the one under ‘How To Deal With Depression’—there’s agreement and there’s disagreement, but unlike in this comment thread there’s no deep confusion about what your point is and how your essay supports it.
So what is that flaw? My theory is that ‘agentiness’ is psychological phlogiston, an imprecise non-explanation which should be purged from our collective vocabulary with great force. Taboo it, decompose it and retry.
If I’m right about the problem but wrong about the solution, my next best guess is that you’ve chosen too complicated an anecdote. I can see why you wouldn’t want to expand on the hospital story specifically, but something about that size might work better.
Hope this helps.
I agree that there isn’t a problem with Shannon’s prose. I thought agentiness was a clear concept, but I might be kidding myself.
In all seriousness, though, why bother? As long as there are colossi striding the world, what possible effect will we mere mortals have?
In general, agency provides its own rewards. I’m more curious what kind of teleological narrative we mere mortals can maintain, in the face of people who are simply objectively better than us at getting shit done, no matter what?
What influence do average people have on anything that actually matters, compared to people like Patri or Eliezer?
As someone who has met Patri and Eliezer (and many other heroes besides), I can tell you this: they are men of flesh and blood, with their own insecurities and fears. And I can tell you that they cannot do it alone—why else would Patri have started the Seasteading Institute, or Eliezer Less Wrong? They have both put significant labor into building communities, support networks, and organizations because they need the help of ‘average people’.
They are impressive. Let’s strive to emulate their best qualities. But to the extent that we wait in the shadows, waiting for them to fix the world for us, we also sabotage their efforts. They need us. They need you.
I’d also recommend you take a look at this diagram.
That assumes that the individual is in control of their own mindset.
Mindsets arise through an interaction of the individual and their environment. The individual’s social environment, in particular, plays a strong role in determining one’s view of challenges and opportunities, of flaws and capabilities, and of agency and fate.
In the absence of warmth, sunlight, nutrients and water, a seed will not grow, even if it is (genetically) a perfectly formed and hardy seed. In the absence of resources and adequately-scaled challenges, a mind will not flourish, even if it is (genetically) a perfectly formed and clever mind.
You sound like you’re making excuses for not trying to do things. It seems like you’re trying to defend your belief that you’re incapable, because admitting that you don’t have to be would mean you’d a) have to do something difficult like try things, b) have to face the potential for failure, and c) have to admit that you’ve been wasting your time working on things that don’t matter as much as what you could be working on.
Secondly—Less Wrong isn’t the worst environment for nurturing your mindset. For all the inaction we have around here, we at least have some pretty good memes (see the Challenging the Difficult sequence).
Anyway—I think you’ll improve your mindset as soon as you want to. I’m going to get back to trying to help.
I believe Ialdabaoth is referring to other environmental factors, not Lesswrong.
My sense was that he was discussing one’s ‘environment in general’, and I was recommending thinking of LW as part of his environment, since it has some good memes. I wasn’t trying to correct a misunderstanding of LW, but rather encourage him to absorb good memes from LW.
Colossi are better at getting shit done when surrounded by a legion of supporters, than when alone. Any given member of that legion may be interchangeable or even ultimately dispensable, but each has a marginal contribution to make.
True. I guess my own personal narrative has taught me to be extremely distrustful of any role where I am ultimately dispensable and interchangeable—I’m tired of being reassigned to bus-axle greasing duties while the bus is still rolling.
I think the idea of near mode and far mode might be useful in formulating a definition. Something like, “a person is like an agent to the extent that they consistently and intelligently work towards accomplishing far-mode goals”.
Also, http://www.paulgraham.com/relres.html
I find the following emotions are often associated with me being insufficiently “agenty”:
Wistfulness. An example.
Exasperation. (Especially exasperation that someone else is not being agenty, about something that I could just as easily take over and get done myself.)
Ugh-field sensations.
Creative “stuckness”. Talking to a beta reader almost always clears up this problem for me inside of fifteen to twenty minutes even if the beta doesn’t actually have anything to say, and I still don’t instantly grab one and start yammering when I feel it.
Non-strategic “but I don’t know how to do X!” (This is sometimes useful strategically, though.)
I’m insufficiently “agenty” in the following situations:
-When my working memory gets too full, i.e. when I’m doing a clinical at the hospital and my mental “to-do” list gets too long, I stop caring whether I’ll get everything done, how I’ll get everything done, whether I know how to do everything I’m supposed to do, etc. I then become a little obedient puppy who follows people around waiting until they tell me to do things.
-Whenever my mental response to a situation is “are you serious?”, my actual response is likely to be less than enthusiastic.
-Feeling embarrassed because of someone else’s behaviour, similar to exasperation. I don’t know if this is a conditioned response to not getting along with fellow members of group projects, but whenever I’m watching someone else struggle because they’re unprepared, say/do something stupid, etc., my motivation to make an effort drops to zero.
By a 5-second estimate, this might be able to interact usefully with some of those phone apps that ask you about stuff at random intervals. Every few minutes, at random, it could ask if you’re feeling any of those things, have you click “yes” or “no”, and if yes, prompt you with the standard response you should have.
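For concreteness, here is a minimal sketch of that check-in loop, written as a plain console script standing in for the hypothetical phone app; the feelings and canned responses are placeholders loosely borrowed from the list above, not anyone's actual setup.

```python
# Rough sketch of the random-interval check-in idea (console stand-in for a phone app).
import random
import time

# Placeholder feelings and pre-decided standard responses; substitute your own.
PROMPTS = {
    "wistfulness": "Write down what you're wishing for and one concrete step toward it.",
    "exasperation": "Could you just take the task over yourself? If so, start now.",
    "creative stuckness": "Grab a beta reader (or a rubber duck) and talk it through.",
}

def check_in():
    """Ask about each feeling; if the answer is yes, show the pre-written response."""
    for feeling, response in PROMPTS.items():
        answer = input(f"Are you feeling {feeling} right now? (y/n) ")
        if answer.strip().lower().startswith("y"):
            print("Standard response:", response)

if __name__ == "__main__":
    while True:
        time.sleep(random.randint(3, 10) * 60)  # wait a random few minutes between check-ins
        check_in()
```

The only real design decision is writing the standard responses in advance, so the prompt can do the thinking for you at the moment you’re least inclined to do it yourself.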
Often, if people fail to be agenty in the context that you are interested in, they are simply saving their energies to be agenty in another.
As to 95% failure rate: It might have been for the best that many of those projects did not proceed. Just because you’ve started something doesn’t mean you should finish it.
Do you have any particular reason to believe that most people are really living at ideal agency levels and simply “saving” it for the things that matter? I’m pretty positive I’m both distinctly above-average in agency (judged by largely having a successful life and resolving my complaints with it over time), and still have fairly severe failures.
At least for me, “I don’t have the energy to accomplish X now, but I’ll put it on my list for when I can” and “I didn’t realize I could accomplish X” are very different states, and it seems like the average person has only a minimal sense of the former.
Even the few people who show agency on any project whatsoever are non-agenty on most projects whose goals they support.
I don’t think it’s just people being stupid. I mean, if I try to become a PC, I will die. (I will abandon my support structures, become unable to sustain effort before building new ones, get horribly sick, and die.) Many people have big losses on failure (internal or external), like responsibility to family.
Still, since you’re a PC who knows lots of PCs: how do people, in practice, go about things like “At the age of 12, Badass Example left her village in East Timor to join the French Resistance. After assassinating Hitler, she founded the Donner Party and was elected President of Belgium.”? I don’t think you can just look up “How to locate and join a secret group” on eHow.
I can understand your concern, given how you are seeing PC. PC does not mean you have to do any specific thing. So it by no means includes a definition that you should abandon your support structures and all that. To me, a PC is someone who makes conscious choices to honor the values they most care about. They tend to see novel solutions to problems because they are willing to consider anything that will solve the problem. They do tend to be less risk averse, but ideally they are not stupid about it :)
It’s a well-known bias that people are naturally much more motivated by fear of pain than by seeking pleasure (I’ve heard figures of 2 or 3 to 1 for pain avoidance vs. pleasure seeking). This is not how I want to live my life, so I have taken steps to correct my psychology for this, to optimize for maximal utility over minimal suffering.
As far as how to do this, there are a lot of personal growth gurus out there happy to teach you things. Landmark is the cheap version that is everywhere in the US, and I can recommend several people in California, New York, and Canada who I know to be especially good, depending on what specialty you are most interested in and what your tolerance for woo is. A lot of LWers are very intolerant of woo, which in my opinion is throwing the baby out with the bathwater, since I think that community does provide a lot of genuine value, so YMMV.
Woo has been renamed to pitches, noting for posterity. Easy enough to google; then again so is gur onfvyvfx yet everyone treats it as a big secret.
Something’s not getting through. I know you understand how depression works so I’m sort of at a loss here. I don’t think I have any options other than “Never do anything out of the ordinary” and “Bite off more than I can chew, then jump ship when everything comes crashing down, abandoning any people I was supposed to work with, and neglect everything while recovering from exhaustion”.
Your vision of being PC is different than mine—we have very different basic assumptions. I don’t think that there is anything in particular required, other than making conscious choices. So there’s nothing requiring you to bite off more than you can chew or to abandon anyone. I would recommend switching to a more PC mode to be done in small steps for most people in most situations. Just try to change one mental habit at a time at first. Pick the lowest hanging fruit. Talking to some sort of coach is very helpful if you can afford it, for help with deciding what to prioritize and having accountability. Does that help?
Not in the least. The only way I can interpret your “anything in particular required, other than making conscious choices” is adding “and I consciously choose to do so” after “I’m sick like a dog, there’s no way I’m going to class, or doing anything more tiring than collapsing back into bed with Downton Abbey fanfic”. Can I have an example, preferably of a very small step?
I’m not quite sure I’m parsing what you’re saying correctly. I’ll give it a try. I would say that if you are genuinely sick, making a conscious choice regarding that would often be to do what is required to recover quickly, so going to bed is quite reasonable. Other agenty things to do about sickness would be to take vitamin D or other remedies that you’ve determined will help you get over the cold faster. I also consider whether or not to take DayQuil or NyQuil; even though my understanding is that they don’t actually help you get over the cold faster, they do often help with work and sleep, so I actively weigh whether it is best to be more highly functioning while sick vs. focusing on speed of recovery when making this choice.
There was a time when I had major surgery, for stage 4 endometriosis, and wanted to go to a wedding exactly a week afterward, when I was told recovery was 2-3 weeks. I was told that I probably shouldn’t fly, but sometimes people recover early enough. So what I did in that case was to focus on recovery as hard as I could for the five days after the surgery. When the nurse asked me if I wanted to go home from the hospital as soon as possible, I said that I wanted to stay right where I was for the full prepaid time I was allowed before having to spend the night. Why would I cause extra trauma to my body while it was in the early stages of recovery, just to be in a familiar environment? Then, after the first 24 hours, where I slept as much as possible but did some walking around every hour to be careful about blood clots, for the next four days I hardly got out of bed and took sleeping pills to encourage myself to stay in bed. I did this until it was time to pack, at which point I got up, packed, got on a plane, and was actually recovered enough by the wedding to be able to dance!
That is an example of agentiness as I see it, because I was working within the constraints I had, and actively thinking about and doing what would cause me to recover most quickly because of my goal of making it to the wedding.
This sounds like a great example of many small things that one does get better at after some training in instrumental rationality.
Your surgery recovery example is weird, because (as you describe it) the nurse came to you and asked you to make a choice with well-defined options (any length of time between “as soon as possible” and “as late as is paid for”) with consequences that were already salient to you. That’s more agenty than “go with whatever the nurse suggests”, but I think most of us can make choices when handed a menu.
Let me take a very stupid example. You want some bylaws written. You look up “how to write bylaws”, and notice there’s a lot to it. You estimate you’ll become able to write bylaws after 200 hours of research. The options that immediately occur to you are:
Research bylaws as much as you are physically able to. This leads to about six hours of learning about bylaws, followed by a daze where you read the same sentence over and over again for ten hours until you can drag yourself into bed, followed by a few entirely unproductive days.
Research bylaws for a couple hours each day, then walk away while you are still fresh. This requires 100 productive days, which, adding days where you have to do something more important and entirely unproductive days, represents about a year. A year later, Patri has written perfectly good bylaws and has started looking for housing and your knowledge is useless.
Chuck non-vital projects like “write bylaws” and focus entirely on becoming more productive. Ten years later, wonder why you haven’t done anything with your life. Drown your crushing sense of failure in whichever drug you determine costs the fewest QALYs.
Think until you find a better option.
Your mental fog is too heavy to decide, so you stretch and stagger into the kitchen for a drink of water. The light bulb needs changed, but your knee and balance are acting up so you save that for later. After a drink, a light snack, and a few minutes forcing yourself into motion, you manage to get yourself to shower. Afterwards, you feel able to think clearly.
What do you do?
I actually find examples like the surgery thing quite frequently in life—the most unusual thing about it may be the way I framed it. I notice options and possibilities and win/win scenarios for making unusual agreements where most people don’t.
With the hospital example, I think the nurse just asked me if I wanted to go home, as opposed to giving me a list of options and implications, although I do not have a recording of the conversation.
Regarding more complex examples, depends on things like opportunity cost. One of the first things I would do would be to discuss with Patri and other agents in the group. When you have multiple agents, you can optimize among everyone’s good ideas, and if you cooperate, you don’t end up with situations like case #2 where Patri and I duplicate work.
I’ve done stuff like that plenty of times. Sometimes I’ve even done the reverse (stuff myself with medicines and stuff this afternoon so that I’ll be fine for the party tonight, even if that means I’d likely be very sick tomorrow and the day after, when I wouldn’t have much to do anyway so I wouldn’t mind staying in bed that much).
Perhaps I missed some previous required-reading, but… what exactly do you mean by “agenty”, “agentiness”, etc.?
(Also, what does “PC” refer to in this context?)
Edit: This?
She means being proactive. If you want something to happen, you do it, or you make it happen however you can.
For example, there’s a local meetup group that we’ve had a couple meetings of—but we don’t have a schedule, we only have meetings when someone is proactive—they say “I’ve found the meeting place, the activity, I’ve emailed the people, come by, I will certainly be there, I will make sure we have a good time no matter how many people show up.” And then we have a meetup.
I’m reminded of the line from The Caine Mutiny that naval ships were designed by geniuses to be run by idiots. If you want to do something, you can either design the ship, or you can just run an existing ship. If we had schedules and a book full of activities and a large group on a mailing list, someone with all the proactiveness of a bag of rocks could make meetups happen. But we don’t have those things, so nothing happens unless someone is feeling “agent-ey.”
This suggests that the way to systematically make things happen is not to organize meetings, but to put in place such a system (schedules, book full of etc.) for organizing meetings. Otherwise you need someone to feel “agent-ey” every time; that doesn’t seem sustainable.
Edit: That is, if you’re a person in such a group and you’re feeling “agent-ey”, Manfred’s comment suggests that your efforts would be better spent putting a system in place that would allow things to happen without any agentness involved, as opposed to putting forth the effort to make a thing happen this one time. I’m not sure if my experience supports this; I’ll have to think about it.
The big choke-point then being item #3 - getting a large group :P
This mirrors my own experience—the way I’ve found to have the most influence, and get the most done is often not being the one completing the tasks, but rather the one creating and documenting the process/procedures, and teaching and training other people to do the work.
It’s also far more lucrative from a career standpoint! :D
PC refers to “player character.” In many games there are many characters, most of whom don’t have goals and function primarily as scenery, and then there are the PCs, who both have goals and move heaven and earth to achieve those goals.
As for “agentiness,” I think a similar term is executive-nature. They’re an entity that can be well modeled by having goals, planning to achieve those goals, and achieving those goals. Many people just react to life; agents act.
Ah, thank you. I’m quite familiar with the term, and with the PC/NPC distinction in games; I just didn’t make the connection in this context. So the idea here is that most people don’t have goals? Or have goals, but don’t act to further them?
See “Humans are not automatically strategic”.
PC refers to “player character”: http://meaningandmagic.com/pc-laws-of-life/
Would you consider removing the last paragraph of this post? It reads like an overt “look at all the high-status people I know! I’m high-status too!” bid and jars with the rest of the post, which is significantly better.
I disagree. It also provides several other examples for those (like Kaj_Sotala) who didn’t find the post’s example of agency sufficient.
Those examples are not descriptive of how agency is hard. They don’t bolster the strength of the post.
Hypothesis: One reason there aren’t many agenty people is that a lot of parents find that agenty children are more of a challenge to their authority than they want and/or take more resources of various sorts than they feel they can afford.
My feeling about this is: Look at animals. They pretty much just do random stuff: hang out here, hang out there, eat when they’re hungry, etc. It’d be kind of surprising if evolution had taken us from that to industrial strength ass-kicking fast enough for civilization not to have formed in between.
(Additionally, there’s the whole near/far idea, that the intelligent parts of our brains aren’t supposed to be controlling our behavior, really.)
That sounds like the behavior of a bored domestic cat, or a bear in a mediocre zoo. Wild animals, especially clever, complex-brained ones can get up to some abstract or spontaneous stuff. Elephants hold funeral vigils for their dead (and at least sometimes for dead humans as well); orcas hunting seals will play sadistically with a captured pup for minutes or hours before getting down to the business of feeding; echidnas will go to incredible lengths to explore something novel, even after ascertaining it has no chance of providing them with food. There’s one recent funny case of an Antarctic leopard seal attempting to teach a human scuba diver how to kill penguins (in much the same way cats sometimes catch mice and leave them for their house-humans). When you add in complex social interactions, animal behavior (particularly that of any species prone to human-noticeable levels of personality variation) is quite dynamic—and even a lot of what looks like superficially simple behavior is the product of low-level drives common to many organisms being acted out in unique ways by the individual critter.
Thanks for the info. My impression was that emotions are like different modes that an animal can operate under, and you switch modes in a kind of haphazard way based on social and environmental cues, energy level, etc. Does that sound more or less accurate?
Has coherent goal-directed behavior spanning multiple days been observed in animals?
Well, that’s sufficiently vaguely-phrased that just something like a pack of wolves or orcas pursuing their quarry for days, which does happen, would seem to qualify. Or the bird building a nest as described below.
FWIW, pregnant African elephants often find a good time and place to give birth around the end of their term and then consume the leaves of a certain tree to induce labor (humans in the area use it for the same purpose). The pregnancy takes over a year and the labor itself, once begun, can take several days.
Something as simple as a bird building a nest would seem to meet that criterion.
Economic reasoning cuts many ways. Consider the trivial point known as Amdahl’s law: speedups are always limited by the slowest serial component. (I’ve pointed this out before but less explicitly.)
Humans do not increase their speed even if specialized AIs are increasing their speed arbitrarily. Therefore, a human+specialized-AI system’s performance asymptotically approaches the limit where the specialized-AI part takes zero time and the human part takes 100% of the time. The moment an AGI even slightly outperforms a human at using the specialized-AI, the same economic reasons you were counting on as your salvation suddenly turn on you and drive the replacement of any humans in the loop.
Since humans are a known fixed quantity, if an AGI can be improved—even if at all times it is strictly inferior to a specialized AI at the latter’s specialization—then eventually an AGI+specialized-AI system will outperform a human+specialized-AI system barring exotic unproven assumptions about asymptotic limits.
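To make the asymptote concrete, here is a toy numerical sketch of the argument; the hour figures are invented purely for illustration.

```python
# Toy illustration of the Amdahl's-law point: only the specialized-AI portion
# of the task gets faster, while the human portion stays fixed.
def total_time(human_hours: float, ai_hours: float, ai_speedup: float) -> float:
    """Wall-clock time for the whole task when only the AI part is sped up."""
    return human_hours + ai_hours / ai_speedup

for speedup in (1, 10, 100, 1_000_000):
    print(f"speedup {speedup:>9}: {total_time(1.0, 9.0, speedup):.4f} hours")

# The total approaches 1.0 hour -- the fixed human step -- no matter how fast the
# specialized AI gets, so replacing that human step, even imperfectly, is where the
# remaining gains are.
```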
(What human is in the loop on high frequency trading? Who was in the loop when Knight Capital’s market maker was losing hundreds of millions of dollars? The answer is that no one was in the loop because humans in the loop would not have been economically competitive. That’s fine when it’s ‘just’ hundreds of millions of dollars at stake and companies can decide to take the risk for themselves or not—but the stakes can change, externalities can increase.)
You’re referring to the Turing Test as a criterion and accusing us of anthropomorphizing AI?
I think that an AI might become intelligent enough to destroy the human species, and still not be able to pass the Turing Test. Same way that we don’t need to mimic whales or apes before being able to kill them.
It’s not us who are anthropomorphizing AI, it’s you who’re anthropomorphizing “intelligence that rivals and eventually surpasses the human intelligence both in magnitude and scope”.
Looks like you’ve completely missed the point of SIAI and massively misunderstand AI theory.
It seems to me like you have not even remotely the right order of magnitude of an idea of just how immense the laziness of some programmers can get. And the lazier programmers get, the more they try to write programs that do all their own work for them.
The ultimate achievement of the lazy programmer is to write a one-time program that will anticipate future needs to write programs, and write programs that can better anticipate such future needs and thus better write programs that meet the need, ad infinitum, without any further intervention from said programmer.
SIAI actually agrees that the above is probably not the most economically sensible thing to do and that it is not what most AIs, or even AGIs, developed in the near future will look like. However, SIAI is also aware that some people will, despite this, still want to be the ultimate lazy programmer and write the ultimate recursively self-modifying AI. No reasonable amount of reasonable arguments will change this fact.
Therefore, something must be done to prevent those AIs they will create from exterminating us. SIAI, in no small part through the work of Yudkowsky, has concluded that the best method of achieving this is currently through FAI research, and that eventually the only solution might be to make a Friendly self-modifying AGI before anyone else makes a non-Friendly one, so that the FAI has an unfair advantage and can outsmart any future non-friendly AGIs.
If you want to avoid being logically rude, you will contest either the premises (1: Some people will attempt to make the ultimate AGI. 2: One of them will eventually succeed. 3: Ultimate AGIs are Accidentally Unfriendly by default.) or some element in the chain of reasoning above. If you fail to do so, then the grandparent comment is understating how much you’re missing the point and sidetracking the discussion.
I think there’s a bit of a chicken-and-egg problem when you’re not much of an agent yet, and you haven’t accomplished anything interesting under your own steam, so it doesn’t really even seem worthwhile to plan anything out. (Another failure mode: it does seem worthwhile to plan things out, but only because you haven’t yet noticed that you rarely work on any of your plans.) Probably it makes sense to debug what’s making you ineffective and build up a track record before tackling anything really big (see success spirals).
If you’re one of those people who makes plans but never works on them, it might be a good idea to start being very distrustful of yourself whenever you say you’re going to do something. Concrete example: Maybe there’s some topic you’ve been intending to study for a while. If you were distrustful of yourself, instead of just continuing to intend to study it, you might block out some time during your week, then set up reminders on your cell phone, rules for when you can skip your study period, rules for when you’re allowed to abandon the project entirely, etc.
The problem with these behavior regulation devices is that building them takes a large activation energy. Here are my solutions for this so far:
Jot down ideas for a device whenever you have them, even if you aren’t going to implement them immediately.
Wait until you’re in an especially energetic or inspired mood, then take advantage of it and implement a few devices (or debug ones that failed).
Have the devices come into effect a while after you’ve finished building them (ex: build your device in the evening and have it activate the next morning).
Consume stimulants like coffee or kratom.
In my experience, after using such devices for a while I no longer needed them as much.
Of course, there are other components to being an agent, e.g. fearlessness. The modern world is pretty safe, but evolution calibrated us for a world that was much more dangerous, especially with regard to social blunders.
I’m wondering if what you call agency and what EY calls PC (or, in extreme/fictional cases, heroic responsibility) is what the rest of the (English-speaking) world calls initiative/motivation/perseverance?
I don’t think it’s quite the same thing; nor do I think your three choices are synonymous or mean the same things.
EY used to use a term ‘anti-sphexishness’, based on Hofstadter’s description of a sphex wasp in GEB which executes its nesting program endlessly if someone messes with it, which seems to be synonymous with ‘PC’ or ‘heroic responsibility’, but which one would certainly not describe as ‘motivation’ or ‘perseverance’ - after all, the sphex wasp endlessly executing its program displays motivation and perseverance beyond any mere human! (Motivation and perseverance beyond that, in fact, of the biologist who was messing with it in Hofstadter’s description.)
Or take this example: an Asian kid is told by his parents to become a doctor, and after endless studying gets into med school, graduates, does his internship etc and becomes a full-fledged doctor. As expected, such things correlate with Conscientiousness, the doctor could fairly be described as having ‘motivation’ and ‘perseverance’ - but did he display ‘initiative’?
Similarly we can think of examples of people who display ‘initiative’ and ‘motivation’ but not perseverance (think ADHD) while also not especially being ‘PC’ - they follow their whim in choosing topics of interest (initiative, because certainly no one told them to pick said topics), and they prosecute said topic with great energy and intensity (motivation), but this leads to no lasting change and represents no deep thought about their goals, preferences, and the state of the world.
I agree, I meant / as +. Since when is division not the same thing as addition… What else would one add to the mix to get the meaning right?
I’m not sure. The meaning of PC/anti-sphexishness/heroic-responsibility seems to be a sort of stepping outside of routine, comparing the status quo with the original intrinsically desirable goals the status quo was supposed to achieve, and taking action to remedy the discrepancies.
You could call this ‘visionary’ or ‘philosophical’ or ‘righteous’ or ‘wise’ but none of them seem right to me—probably why those 3 terms were invented. ‘Enlightened’ comes close but only if you were living 2 centuries ago, because these days ‘enlightened’ is sarcastic or religious in undertones. (That is, figures like Voltaire were ‘enlightened’ but also definitely reminiscent of the 3 terms.)
Right after WWII John Holt was finishing out his U.S. Navy tour of duty on the west coast. In Never Too Late he tells this story about his favorite band at the time, the Woody Herman Herd, who he had listened to only on records:
(Luckily he did manage to schedule his trip back east so that he crossed paths with the Herd in Chicago.)
As you say, getting things to happen takes a lot of time and effort, which is one of the things I learned when working on group projects in school. I think that most people, when they realize just how much effort it takes to Get Stuff Done, usually end up saying “screw it, there are other things I’d rather be doing” and go on doing whatever it is that they were doing before.
Thing is, getting things to happen doesn’t actually take a LOT of time and effort (depending on what you’re trying to make happen). The difference between something happening and not can be as simple as making a Facebook event and inviting a bunch of people. The key is that someone has to take responsibility, however little or however much effort that is, to say “I will be the one to try and make this happen” instead of saying “Man, this should really be a thing.”
It’s the old “herding cats” problem—finding a time and place that you can get enough people to show up at is hard.
Never ask. Asking a bunch of people about that will end up taking forever and have no conclusive answer. Schedule events when convenient for what you think will be multiple people sometime in advance, but with a concrete time and place. People will happily give you input after you do this and you can change it later, but this is much more reliable.
The beauty of defaults: the default to an open question is to ignore or not act, the default to an announcement of a date is to think about it.
I think the biggest problem with group projects in school is that a) the people you’re working with aren’t pre-filtered for motivation, and b) you’re working on toy problems that, likely, no one in the group would bother to do if it weren’t assigned. Some people want to get 90% because they have a scholarship and need to maintain a certain GPA, but some people are happy just to pass and want to do the minimum of work, and pretty much everyone has the attitude that “it’s just school”.
I’ve had group projects break down both because I was the most motivated person in the group (and lacked the leadership/interpersonal skills to deal with this), and because I was the least motivated person in the group (I’m somewhat spoiled and used to getting As without putting in too much time, and I was not prepared to have 3-hour group meetings starting at 8 pm after class.)
In general, I would expect group projects to run more smoothly in the workplace, both because people are more motivated–they’re in their chosen field, they’re getting paid, they’re working on a real world problem, etc–and because the process of getting hired filters for interpersonal skills, which isn’t the case for getting accepted into college or university, so you’re less likely to end up with people who can’t work with other people.
Grade school didn’t assign me many group projects. In fact, I can only remember one. And on that one I think I tried cooperating for like 15 minutes or something and then told the other two kids, “Go away, I’ll handle this” because it was easier to just do all the work myself.
Sometimes our early life experiences really are that metaphorical.
Similar, possibly relevant anecdote from my own life:
In a recent computer science class I took, we (the entire class as a whole) were assigned a group project. We were split up into about 4 sub-groups of about 6 people each; each sub-group was assigned a part of the project. I was on the team that was responsible for drawing up specifications, coordinating the other groups, testing the parts, and assembling them into a whole.
Like Eliezer, I quickly realized that I could just write the whole thing myself (it was a little toy C++ program). And I did (it took maybe a couple of days). However, the professor was (of course) not willing to simply let me submit a complete project which I had written in its entirety; and since the groups were separated, there was no way for me to submit my work in a way that would plausibly let me claim that any sort of cooperation had taken place.
So I had to spend the rest of the semester trying to get the other groups to independently write the code that I had already written, trying to get them to see that the solutions I’d come up with were in fact working ones, and generally having conversations like the following:
Clueless Classmate: How should we do this? Perhaps [some approach that won’t work]?
SaidAchmiz: Mmm… perhaps we might instead try [the solution I’d already implemented].
CC: That doesn’t make any sense and will never work!
SA: Sigh.
I’m not quite sure what this could be a metaphor for, but it certainly felt rather metaphorical at the time...
It sounds like a metaphor for “what you need to learn to handle effectively in order to succeed in a typical workplace”. Good luck!
My question is: are there some straightforward heuristics one can apply to find/select a workplace where such things occur as little as possible? At what kinds of places can one expect more of this, and at what kinds less? The effort to find a workplace where you do NOT have to handle such situations seems like it would be more effective in the long run (edit: that is, more effective in achieving happiness/sanity/job satisfaction) than learning to deal with said situations (though of course those things are not mutually exclusive!).
Yes, and it is an extremely high expected-value decision to actively seek out people who understand which workplaces are likely to be most suitable according to this and other important metrics.
And sometimes they aren’t. As a ‘grown up’ you outright founded (with assistance) an organisation to help you handle your new project as well as playing a pivotal role in forming a community around a relevant area of interest. Congratulations are in order for learning to transcend the “just do myself” instinct—at least for the big things, when it matters.
It seems to me that “selecting people with whom I can usefully cooperate” is different from “learning to cooperate with arbitrarily assigned people”. Do you think that captures the distinction between Eliezer’s grade school anecdote and his later successes, or is that not a meaningful difference?
It is certainly a significant factor. (Not the only one. Eliezer is also wiser as an adult than he was as a child.)
Math over metaphor. This is a common experience. Assume a child at the 99th percentile of their age group by intelligence
Elementary school is assigned by geography, so average intelligence is 50th percentile (or fairly close)
By middle school there may not be general tracking for all students, but very low performers have been tracked off (say 20% of students), so average intelligence is 60th percentile
In high school they often have a high and a low track, split the students 50/50 and you get average intelligence is 80th percentile
If the student then goes to a college that rejects 90% of applicants (or gets work in a similar selective profession) average intelligence is 98th percentile
and all of a sudden the student is now well socialized and has learned the important skill of cooperating with their peers.
EDIT: the above holds if you track “effectiveness” which is some combination of conscientiousness and intelligence instead of intelligence. In practice I expect most tracking systems capture quite a bit of conscientiousness, but the above reads more cleanly with intelligence in each line than “some combination of conscientiousness and intelligence”
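For anyone who wants the arithmetic spelled out, here is a quick sketch under the simplifying assumptions that percentiles are uniformly distributed and each cut happens exactly as described above.

```python
# Back-of-the-envelope version of the tracking story: at each stage, compute the
# average original percentile of the peers who remain (uniform distribution assumed).
def mean_percentile(low: float, high: float) -> float:
    """Mean original percentile of students retained between two percentile cutoffs."""
    return (low + high) / 2

stages = [
    ("elementary school (everyone)",                 mean_percentile(0, 100)),   # ~50th
    ("middle school (bottom 20% tracked off)",       mean_percentile(20, 100)),  # ~60th
    ("high school, high track (top half remaining)", mean_percentile(60, 100)),  # ~80th
    ("selective college (top ~10% of that track)",   mean_percentile(96, 100)),  # ~98th
]

for stage, avg in stages:
    print(f"{stage}: average peer at roughly the {avg:.0f}th percentile")
```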
This is a bit off-topic, but I think that the word “competence” effectively conveys the meaning of, “some combination of conscientiousness and intelligence”.
I was generally lucky with group project partners at school (both high school and college), I guess; I didn’t have much explicit conflict, and I never had one actually fall apart. There was one class in which I was in a group of three in which I did 95% of the work, one of the other two people did about 60% of the work, and the third guy basically didn’t show up, but I was okay with that. I wrote code that worked, the second guy supported me so I could write that code, and the third guy would have only gotten in the way anyway.
Edit: (Yes, that adds up to more than 100%, because of duplication of effort, inefficiency, correcting each other’s mistakes, etc. In other words, it’s a joke, along the lines of “First you do the first 80%, then you do the second 80%.”)
...?
I interpreted that as “I did 95% of the work that I said I would do, and one of the other partners did 60% of the work that he committed to do, and the third partner didn’t do anything.” But yeah, if you interpret it as straight-up percentages, it doesn’t really add up...
I read it as CronoDAS implying his group did 55% more work than it needed to, but your interpretation makes as much sense.
I could’ve sworn there was a bit in the grandparent that acknowledged this apparent contradiction, and pointed out pair programming as the explanation, but it seems to be gone. That would, in any case, account for the percentages.
Yeah, I don’t know what happened to that paragraph.
Out of curiosity, am I the only one who has experienced at least some instances of productive group work in high school and college? I am not nearly as smart as most people here, so perhaps that fact played a role, since I actually needed the cooperation of other people in order to get the job done.
You are affirming the consequent and also overgeneralizing.
I argued that ‘some economically valuable uses of AGI are replacing humans’ (disproving Szabo’s core argument that “AGI can always be outperformed at a specific task by a specialized-AI, therefore, there are no economically valuable uses of AGI”).
That is not the same thing as ‘all replacements of humans are economically valuable uses of AGI’ for which ‘non-AGI HFTs replacing humans’ serves as a disproof (but so would cars or machines, for that matter).
Good strawmanning. Very subtle.
That’s the key part. The specialized high-frequency-trading software won’t be replaced, but the humans who use that specialized software will eventually be replaced if someone figures out how to make an AGI that can think about all the relevant variables and can be scaled to go faster and better than a human.
Never heard of mayors and judges and ship captains officiating marriages?
ETA: James Randi of the JREF has also officiated at a wedding.
A marriage ceremony, officiated by an available elder of some sort. Name changes. Wow, what sort of crazy culture or subculture would do that kind of thing? Oh, right. Most of them, in some shape or form. This was actually a hat tip towards normality, not the reverse. Like celebrating Christmas with family and friends without actually believing in a Christ.
Your ranting is nonsense. Get some perspective.
V_V’s comments do serve as datapoints towards what elements can look cultish to outsiders, even though, I agree with you, such a thing would be unfair, as pretty much every community does these things.
I do not model V_V as someone whose utterances can be considered representative of outsiders.
V_V’s utterances about what looks cultish are generally useless when it comes to talking about ideas: trying to shame us into not having certain ideas just because they look bad is a rather circular and useless argument. (And frankly “transhumanism”, “cryonics”, and “AI apocalypse” bring to mind the low status of an SF geek, not the low status of a cult, so V_V’s words doubly miss the mark in this respect.)
On the other hand, practices like marriage ceremonies and cohabitation pattern-match more closely, so they are something to be careful about from a public relations perspective. But it’s not as if I’m sure whether they’re a net positive or a net negative all things considered; so consider my words to be hesitant and uncertain, not a full endorsement of V_V’s criticism...
Pattern matching and public relations are both interesting and important, but using V_V as an outsider datapoint while considering them would produce unreliable results.
I have to disagree here. Even if, from the outside view, a Christian marriage or whatever is just as weird as a Yudkowskian one, it definitely feels cultish to me, and I’m an atheist. The normal way to get married is NOT to be married by a friend of yours whose teachings you follow.
Errh. What is the normal way to get married then, from your view? Mail a letter to the nearest municipal or judicial office?
“Getting married”, once shed of all religious connotations and other nasty bits, is a social contract made before witnesses and publicized so that: 1) the spouses are more motivated to cooperate and remain at a high level of mutual affection; and 2) individuals not party to the marriage (i.e. everyone else) are aware that these spouses are “together”, presumably for a long time, that they should not get in their way, and that they are not “available”.
That’s the way I see it / was taught, anyway.
In a church, with two families present, by a priest. Just because it’s nonsense doesn’t make it not normal.
I don’t think you’re using the right reference class for the question. If we’re talking about the set of people who might find Less Wrong interesting, I predict that most of them would find it more weird if two atheists from atheist families got married by a priest than if they got married by the head of an Internet community. (Most normal for that reference class is picking a celebrant who’s just a friend, or a Unitarian minister, or a comedian, etc.)
I’ve got a number of friends in non-SingInst/LW circles who’ve been married in public ceremonies overseen by friends whom they consider wise, or instrumental in their social groups, or simply good speakers. I don’t have any actual data, but in the circles I run in it seems like one of the more popular secular options.
First, I’d like to ask why you didn’t reply directly to my previous comment, and instead started an entirely separate top-level comment. I hope your motive wasn’t less than honorable, like hoping that I wouldn’t notice and people would infer that I was tacitly admitting I was refuted? Hopefully I’m just being paranoid and you were careless about posting your comment or something.
The claim “AGI will be more effective at replacing humans in using specialized-AIs” was assumed in my argument, and also not criticized by Szabo, who thinks his argument works even granting the existence of such AGI:
Great piece, Shannon. Brings to mind a couple of things.
What you call “agency” is, in Landmartian, “being cause in the matter,” being “at cause,” “taking a stand,” and acting “consistently with that stand.”
This is distinguished from being caught in a “racket,” defined as a persistent complaint combined with a fixed way of being. Someone caught in a racket does not take responsibility for things as they are, but rather sets up stories that express being a victim of circumstances or others. The generic alternative is to accept responsibility, as a stand, not as a “truth.”
That’s been oft-misunderstood. I am responsible for, say, the WTC attack, as a stand, not as a fact. If I’m responsible, it means that I can look at my life as missing something that might make a difference, as full of possibilities.
In any case, most people, most of the time, are not at cause, we are simply reacting.
Then, if we actually take responsibility, beyond merely saying a few words, we act in accordance with that, which includes making mistakes, picking ourselves up and acting again, varying behavior as necessary to find a path to fulfillment.
A conversation I’ve had is “How many people does it take to transform society?”
The answer I’ve generally come up with is two. It’s amazingly difficult to find two. Maybe that’s just my racket, but your story shows how two can sometimes find more, if more are required to realize a stand. Two is where it starts. At least one of the two must be willing to be at cause, and able to stand there.
I think that being “agenty” includes being good at making the sort of changes you want as well as working on making those changes.
Okay, it starts with a declaration, with an assumption of responsibility, with taking a stand; but “creating structures for fulfillment”, as they are called, is something that is strengthened with practice.
Took me a while to sort out the background for this. I take it your “Landmartian” indicates the parlance of Landmark Education?
Yes. I made that up, but Landmartians immediately recognize it.
Wow, way to miss the point and not respond to the argument—you know, the stuff that is not in parentheses.
(And anyway, how exactly am I supposed to give an example where AGI use is driven by economic pressures to surpass human performance, when AGI doesn’t yet exist?)
So, even though you didn’t clearly contest any of the premises or the reasoning, let’s assume that the second paragraph is a rebuttal to premises (1) and/or (2) of the grandparent.
I contest this premise, and I’m really wondering where you’d think that up. As technology progresses, we’ve noticed that it gets easier and easier to do stuff that was previously only possible for massive organizations.
Examples include, well, anything involving computers (since computers were at first something only massive organizations could possess, until a bunch of nerds cooked one up in their basement), creating new software in the first place, creating videogames, publishing original research, running automated data-miners, creating new hardware gadgets, creating software that emulates hardware devices, validating formal mathematical proofs, running computer simulations...
...I could probably go on for a while, but I’m being warned that this should be enough to point at the right empirical cluster. Basically, we have lots of evidence saying that new-stuff-that-can-only-be-done-by-large-organizations can eventually be done by smaller groups, and not much that sets AGI apart as a particular exception other than the current perceived level of difficulty.
I just pointed out how economic reasoning can justify an AGI which is outperformed at any specific task by a specialized-AI. I’m not even an economist and it’s a trivial argument, yet—there it is.
Even if one had a formal proof that AGIs must always be outperformed, that still would not show that AGIs will not be worth developing. You need a far more impressive argument covering all economic possibilities, especially since software & AI techniques are so economically valuable these days, with no sign of interest letting up, so handwaving arguments look implausible.
(I would be deeply amused to see a libertarian like Nick Szabo try to do such a thing because it runs so contrary to cherished libertarian beliefs about the value of local knowledge or the uselessness of elites or the weakness of theory, though I know he won’t.)
Oddly, it seems to me that anthropomorphization is what makes people think AGI is perfectly safe.
Yeah, you treat the concept of new technologies (even though we experience new technologies every single year) on the same level as ‘miracles’ (which we’ve never experienced at all). I get that.
And I’ve seen lots of religious people argue thusly: “You believe in ‘electrons’ and ‘quarks’ that you’ve never seen with your own eyes, and I believe in angels and demons that I’ve never seen either. Therefore your ‘scientific’ ideas are just as faith-inspired as mine.”
If we’re to throw guilt-by-perceived-association around, then I think that your criticism of LW ideas is typically religious. You’re following the typical argument of the religious, where you try to claim all belief in things unseen is equally reasonable, all expectations of the future are equally reasonable, and hence “see, you’re also a religion after all”.
I think I’ll have to revise my position—you are really not saying anything worth hearing.
It’s supposed to prevent people from feeding the trolls.
I don’t think this is a meaningful reply, or perhaps it’s just question-begging.
If having a coherent goal is the point of the human in the loop, then you are quietly ignoring the hypothetical given that ‘every human skill has been transferred’ and your points are irrelevant. If having a coherent goal is not what the human is supposed to be doing, well, every agent can be considered to ‘exhibit unanticipated behavior’ from the point of view of its constituent parts (what human behavior would you anticipate from a single hair cell?), and it doesn’t matter what the behavior of the complex system is—just that there is behavior. We can even layer on evolutionary concerns here: these complex systems will be selected upon and only the ones that act like agents will survive and spread!
Yeah, whatever.
Arguing against ‘necessarily leading to a super-intelligent but essentially human-like mind’ is a big part of Eliezer and LW’s AI paradigm in general going back to the earliest writings & motivation for SIAI & LW, one of our perennial criticisms of mainstream SF, AI writers, and ‘machine ethics’ writers in particular, and a key reason for the perpetual interest by LWers in unusual models of intelligence like AIXI or in exotic kinds of decision theories.
If you’ve failed to realize this so profoundly that you can seriously write the above—accusing LW of naive religious-style anthropomorphizing! - all I can conclude is that you are either very dense or have not read much material.
I don’t follow. My comment http://lesswrong.com/lw/f53/now_i_appreciate_agency/7q56 was not at any point in the negative, much less the −5 or whatever that would cause the new karma penalty thing to kick in.
If every human skill has been transferred, including that of employing or combining specialized-AIs, then in what sense do the groups of specialized-AIs not then comprise an AGI?
This argument would seem to reduce you to confronting a dilemma: if every human skill has been transferred to specialized-AIs, then a complex of specialized-AIs by definition now forms an AGI which outperforms all humans; if not every human skill has been transferred, such as employing specialized-AIs, then there is the very large economic niche for AGIs which I have identified with my Amdahl’s law argument. So either there exist AGI which outperform all humans, or there exists economic pressure for AGI.
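For readers who have not seen the Amdahl’s law argument being referenced, its shape can be sketched in a few lines. This is my own illustration with hypothetical numbers (the 5% “human coordination” fraction is an assumption for the example), not anything from the original exchange:

```python
def amdahl_speedup(automated_fraction, ai_speedup):
    """Overall speedup when `automated_fraction` of the work is accelerated
    by a factor of `ai_speedup` and the remainder stays at human speed."""
    return 1 / ((1 - automated_fraction) + automated_fraction / ai_speedup)

human_fraction = 0.05  # hypothetical: 5% of the work is still human coordination of specialized AIs
for ai_speedup in (10, 100, 1_000_000):
    print(ai_speedup, round(amdahl_speedup(1 - human_fraction, ai_speedup), 2))
# 10 6.9
# 100 16.81
# 1000000 20.0
```

No matter how fast the specialized AIs get, the overall speedup is capped near 1 / human_fraction (20x here), which is exactly the economic pressure to automate the remaining human step as well.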
I believe the penalty now applies if any comment upstream has the requisite negative value.
Oh. I thought it was just for replying to the comment which was negative. I guess this is what Wedrifid or whomever meant when they pointed out that the feature could strike in unexpected places...
Indeed. It can be very annoying to reply to a positive-karma comment and discover you will be charged 5 karma for the privilege.
I want someone to undo this part, if not the whole thing. Discouraging people from replying to people who are unpopular or wrong is bad. Preventing new users who are perceived as wrong from defending themselves is extremely bad.
If you don’t want to discourage replies to downvoted comments, then you want to undo the whole thing. That’s what this feature is for. It shouldn’t be doing anything else, and if it is then that’s a mistake that should be corrected.
Regardless of whether or not we should discourage replies to downvoted comments, we should avoid discouraging replies to the replies to downvoted comments. People who are downvoted should not be discouraged from speaking up about their ideas, even if those ideas are bad. That’s the way that those people go about improving.
Additionally, if they’re discouraged from defending their ideas in more detail or from addressing criticisms, but they actually happened to be correct or at least to make a good point, then discouraging them is an extremely bad idea.
Oh, agreed.
Please clarify this plainly for me: Are you saying these technologies will NEVER be developed? Not in 25 years, nor in 100 years, nor in 500 years, nor in 10,000 years?
Is your whole disagreement a matter of timescales—whether it is likely to happen within our lifetimes or not?
Because if so, then there are a lot of us here who likewise don’t expect to see AGI in our lifetimes.
If you’re not saying “It will NEVER happen”, then please specify a date by which you’d assign a probability > 50% to these technologies having happened.
But until then, again your whole argument seems to be “it hasn’t happened yet, so it will never happen.”